-
Publication No.: US20240214279A1
Publication Date: 2024-06-27
Application No.: US18433291
Filing Date: 2024-02-05
Applicant: Intel Corporation
Inventor: Matthew J. ADILETTA , Zane BALL , Susanne M. BALLE , Patrick CONNOR
IPC: H04L41/5019 , H04L41/16 , H04L43/0823
CPC classification number: H04L41/5019 , H04L41/16 , H04L43/0823
Abstract: Examples described herein relate to determining whether or not to process data based on a reliability metric. For example, upon receiving, from one or more servers, a response to a request to a first microservice together with the reliability metric, a decision can be made whether a second microservice is to process a result associated with the response, based on the reliability metric. In some examples, the reliability metric comprises an indicator of memory health and computational accuracy.
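The decision flow in this abstract can be illustrated with a minimal sketch (Python); the field names, threshold value, and helper functions below are assumptions for illustration, not details from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Response:
    result: bytes
    reliability: float  # e.g., a combined indicator of memory health and computational accuracy

RELIABILITY_THRESHOLD = 0.95  # assumed policy value

def maybe_process(resp: Response) -> Optional[bytes]:
    """Second microservice: process the result only if the reliability metric meets policy."""
    if resp.reliability < RELIABILITY_THRESHOLD:
        return None  # do not process; e.g., drop the result or re-issue the request
    return process_result(resp.result)

def process_result(result: bytes) -> bytes:
    # Stand-in for the second microservice's actual processing.
    return result
```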
-
Publication No.: US20220321438A1
Publication Date: 2022-10-06
Application No.: US17733086
Filing Date: 2022-04-29
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Susanne M. BALLE , Rahul KHANNA , Sujoy SEN , Karthik KUMAR
IPC: H04L43/08 , G06F16/901 , H04B10/25 , G02B6/38 , G02B6/42 , G02B6/44 , G06F1/18 , G06F1/20 , G06F3/06 , G06F8/65 , G06F9/30 , G06F9/4401 , G06F9/54 , G06F12/109 , G06F12/14 , G06F13/16 , G06F13/40 , G08C17/02 , G11C5/02 , G11C7/10 , G11C11/56 , G11C14/00 , H03M7/30 , H03M7/40 , H04L41/14 , H04L43/0817 , H04L43/0876 , H04L43/0894 , H04L49/00 , H04L49/25 , H04L49/356 , H04L49/45 , H04L67/02 , H04L67/306 , H04L69/04 , H04L69/329 , H04Q11/00 , H05K7/14 , G06F15/16
Abstract: Technologies for dynamically managing resources in disaggregated accelerators include an accelerator. The accelerator includes acceleration circuitry with multiple logic portions, each capable of executing a different workload. Additionally, the accelerator includes communication circuitry to receive a workload to be executed by a logic portion of the accelerator and a dynamic resource allocation logic unit to identify a resource utilization threshold associated with one or more shared resources of the accelerator to be used by a logic portion in the execution of the workload, limit, as a function of the resource utilization threshold, the utilization of the one or more shared resources by the logic portion as the logic portion executes the workload, and subsequently adjust the resource utilization threshold as the workload is executed. Other embodiments are also described and claimed.
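A rough sketch of the control loop described here, under assumed names and an assumed adjustment rule (the abstract does not specify how the threshold is updated):

```python
def run_with_threshold(workload_steps, measure_utilization, throttle, initial_threshold=0.5):
    """Limit a logic portion's use of a shared resource and adjust the limit as it runs."""
    threshold = initial_threshold
    for step in workload_steps:
        utilization = measure_utilization()    # e.g., fraction of shared memory bandwidth in use
        if utilization > threshold:
            throttle(utilization - threshold)  # limit the logic portion's shared-resource use
        step()                                 # execute the next portion of the workload
        # Assumed adjustment rule: drift the threshold toward the observed utilization.
        threshold = min(1.0, 0.9 * threshold + 0.1 * utilization)
    return threshold
```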
-
Publication No.: US20220113911A1
Publication Date: 2022-04-14
Application No.: US17558268
Filing Date: 2021-12-21
Applicant: Intel Corporation
Inventor: Andrzej KURIATA , Susanne M. BALLE , Duane E. GALBI , Sundar NADATHUR , Nagabhushan CHITLUR , Francesc GUIM BERNAT , Alexander BACHMUTSKY
Abstract: Methods, apparatus, and software for remote storage of hardware microservices hosted on other processing units (XPUs) and SOC-XPU platforms. The apparatus may be a platform including a System on Chip (SOC) and an XPU, such as a Field Programmable Gate Array (FPGA). Software executing on the SOC enables the platform to pre-provision storage space on a remote storage node and assign the storage space to the platform, wherein the pre-provisioned storage space includes one or more container images to be implemented as one or more hardware (HW) microservice front-ends. The XPU/FPGA is configured to implement one or more accelerator functions used to accelerate HW microservice backend operations that are offloaded from the one or more HW microservice front-ends. The platform is also configured to pre-provision a remote storage volume containing worker node components and to access and persistently store those worker node components.
-
Publication No.: US20230185760A1
Publication Date: 2023-06-15
Application No.: US17549727
Filing Date: 2021-12-13
Applicant: Intel Corporation
Inventor: Susanne M. BALLE , Duane E. GALBI , Andrzej KURIATA , Sundar NADATHUR , Nagabhushan CHITLUR , Francesc GUIM BERNAT , Alexander BACHMUTSKY
IPC: G06F15/78
CPC classification number: G06F15/7889 , G06F15/7821 , G06F15/7871 , G06F2015/768
Abstract: Methods, apparatus, and software for hardware microservices accelerated in other processing units (XPUs). The apparatus may be a platform including a System on Chip (SOC) and an XPU, such as a Field Programmable Gate Array (FPGA). The FPGA is configured to implement one or more Hardware (HW) accelerator functions associated with HW microservices. Execution of microservices is split between a software front-end that executes on the SOC and a hardware backend comprising the HW accelerator functions. The software front-end offloads a portion of a microservice and/or associated workload to the HW microservice backend implemented by the accelerator functions. An XPU or FPGA proxy is used to provide the microservice front-ends with shared access to HW accelerator functions, and schedules/multiplexes access to the HW accelerator functions using, e.g., telemetry data generated by the microservice front-ends and/or the HW accelerator functions. The platform may be an infrastructure processing unit (IPU) configured to accelerate infrastructure operations.
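One way to picture the proxy's multiplexing role is the following sketch; the class name, the use of a dispatch count as a telemetry stand-in, and the interfaces are assumptions:

```python
import heapq

class XPUProxy:
    """Gives microservice front-ends shared access to a pool of HW accelerator functions."""
    def __init__(self, accelerator_fns):
        # (requests dispatched so far, index, callable) per HW accelerator function instance.
        self.pool = [(0, i, fn) for i, fn in enumerate(accelerator_fns)]
        heapq.heapify(self.pool)

    def offload(self, payload):
        # Schedule the request onto the least-used function; the dispatch count is a
        # crude stand-in for the telemetry the abstract mentions.
        count, i, fn = heapq.heappop(self.pool)
        heapq.heappush(self.pool, (count + 1, i, fn))
        return fn(payload)  # HW microservice backend performs the offloaded work
```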
-
Publication No.: US20230023229A1
Publication Date: 2023-01-26
Application No.: US17952835
Filing Date: 2022-09-26
Applicant: Intel Corporation
Inventor: Karthik KUMAR , Francesc GUIM BERNAT , Alexander BACHMUTSKY , Susanne M. BALLE , Andrzej KURIATA , Nagabhushan CHITLUR
IPC: G06F3/06
Abstract: In a server system, a host computing platform can have a processing unit separate from the host processor to detect and respond to failure of the host processor. The host computing platform includes a memory to store data for the host processor. The processing unit has an interface to the host processor and the memory, an interface to a network external to the host processor, and access to the memory. In response to detection of failure of the host processor, the processing unit migrates data from the memory to another memory or storage.
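A hedged sketch of the described behavior, assuming a simple heartbeat-style liveness check and generic read/write callbacks (none of which are specified in the abstract):

```python
import time

def monitor_and_migrate(host_alive, read_host_memory, write_backup, poll_s=1.0, misses_allowed=3):
    """Run on the separate processing unit: watch the host processor and migrate on failure."""
    misses = 0
    while True:
        if host_alive():
            misses = 0
        else:
            misses += 1
            if misses >= misses_allowed:           # treat the host processor as failed
                for region in read_host_memory():  # the processing unit can access host memory
                    write_backup(region)           # copy to another memory or to storage
                return
        time.sleep(poll_s)
```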
-
Publication No.: US20220206864A1
Publication Date: 2022-06-30
Application No.: US17694516
Filing Date: 2022-03-14
Applicant: Intel Corporation
Inventor: Sundar NADATHUR , Susanne M. BALLE , Andrzej KURIATA , Duane E. GALBI , Nagabhushan CHITLUR , Francesc GUIM BERNAT , Alexander BACHMUTSKY
Abstract: Examples described herein relate to causing execution of a workload on a device based on characteristics of the device and on metadata associated with the device that identifies execution requirements and software and hardware compatibilities between the device and a platform environment. In some examples, an accelerator device is selected to execute a workload based on characteristics of the accelerator device and on software and hardware compatibilities between the device and the platform environment of the accelerator device.
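The selection logic might look like the following sketch; the dictionary layout and field names are illustrative assumptions:

```python
def select_device(workload, devices):
    """Return the first accelerator whose characteristics and metadata satisfy the workload."""
    for dev in devices:
        meta = dev["metadata"]
        if (set(workload["required_ops"]) <= set(dev["characteristics"]["ops"])
                and workload["framework"] in meta["software_compat"]
                and workload["platform_env"] in meta["hardware_compat"]):
            return dev
    return None  # no compatible device; the workload is not placed
```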
-
Publication No.: US20230344894A1
Publication Date: 2023-10-26
Application No.: US18216524
Filing Date: 2023-06-29
Applicant: Intel Corporation
Inventor: Susanne M. BALLE , Shihwei CHIEN , Andrzej KURIATA , Nagabhushan CHITLUR
IPC: H04L67/025
CPC classification number: H04L67/025
Abstract: An apparatus is described. The apparatus includes a host side interface to couple to one or more central processing units (CPUs) that support multiple microservice endpoints. The apparatus includes a network interface to receive from a network a packet having multiple frames that belong to different streams, the multiple frames formatted according to a text transfer protocol. The apparatus includes circuitry to: process the frames according to the text transfer protocol and build content of a microservice function call embedded within a message that one of the frames transports; and execute the microservice function call.
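A simplified model of the frame handling described; the frame fields, message encoding, and handler table are assumptions (the abstract only says the frames follow a text transfer protocol):

```python
from collections import defaultdict

streams = defaultdict(list)   # stream id -> accumulated frame payloads
handlers = {"GetUser": lambda body: {"user": body.decode()}}  # hypothetical endpoint table

def on_frame(stream_id, payload, end_of_message):
    """Group frames per stream; on a complete message, build and execute the function call."""
    streams[stream_id].append(payload)
    if not end_of_message:
        return None
    message = b"".join(streams.pop(stream_id))
    method, _, body = message.partition(b"\n")  # assumed encoding of the embedded call
    return handlers[method.decode()](body)      # execute the microservice function call
```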
-
Publication No.: US20220382944A1
Publication Date: 2022-12-01
Application No.: US17327210
Filing Date: 2021-05-21
Applicant: Intel Corporation
Inventor: Han YIN , Xiaotong SUN , Susanne M. BALLE
IPC: G06F30/331 , G06F13/12 , H04L29/12
Abstract: Methods and apparatus for an extended inter-kernel communication protocol for discovery of accelerator pools configured in a non-star mode. Under a discovery algorithm, discovery requests are sent from a root node to non-root nodes in the accelerator pool using an inter-kernel communication protocol comprising a data transmission protocol built over a Media Access Control (MAC) layer and transported over links coupled between IO ports on accelerators. The discovery requests are used to discover each of the nodes in the accelerator pool and determine the topology of the nodes. During this process, MAC address table entries are generated at the various nodes, comprising (key, value) pairs of MAC IO port addresses that identify the destination nodes reachable from each node and the shortest path to reach those destination nodes. The discovery algorithm may also be used to discover storage-related information for the accelerators. The accelerators may comprise FPGAs or other processing units, such as GPUs and Vector Processing Units (VPUs).
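As a simplified model of the discovery pass, the sketch below builds the root node's MAC address table with a breadth-first traversal; the data structures and the neighbor callback are assumptions, not the patented protocol:

```python
from collections import deque

def discover(root, neighbors_of):
    """neighbors_of(node) -> iterable of (io_port_mac, neighbor); returns the root's MAC table."""
    mac_table = {}                   # destination MAC -> (egress IO port MAC, hop count)
    queue = deque([(root, None, 0)])
    seen = {root}
    while queue:
        node, via_port, hops = queue.popleft()
        if via_port is not None:
            mac_table[node.mac] = (via_port, hops)  # BFS reaches each node by a shortest path first
        for port_mac, nbr in neighbors_of(node):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, via_port or port_mac, hops + 1))
    return mac_table
```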
-
Publication No.: US20210314245A1
Publication Date: 2021-10-07
Application No.: US17235135
Filing Date: 2021-04-20
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Susanne M. BALLE , Rahul KHANNA , Sujoy SEN , Karthik KUMAR
IPC: H04L12/26 , G06F16/901 , H04B10/25 , G02B6/38 , G02B6/42 , G02B6/44 , G06F1/18 , G06F1/20 , G06F3/06 , G06F8/65 , G06F9/30 , G06F9/4401 , G06F9/54 , G06F12/109 , G06F12/14 , G06F13/16 , G06F13/40 , G08C17/02 , G11C5/02 , G11C7/10 , G11C11/56 , G11C14/00 , H03M7/30 , H03M7/40 , H04L12/24 , H04L12/931 , H04L12/947 , H04L29/08 , H04L29/06 , H04Q11/00 , H05K7/14 , G06F15/16
Abstract: Technologies for dynamically managing resources in disaggregated accelerators include an accelerator. The accelerator includes acceleration circuitry with multiple logic portions, each capable of executing a different workload. Additionally, the accelerator includes communication circuitry to receive a workload to be executed by a logic portion of the accelerator and a dynamic resource allocation logic unit to identify a resource utilization threshold associated with one or more shared resources of the accelerator to be used by a logic portion in the execution of the workload, limit, as a function of the resource utilization threshold, the utilization of the one or more shared resources by the logic portion as the logic portion executes the workload, and subsequently adjust the resource utilization threshold as the workload is executed. Other embodiments are also described and claimed.
-
Publication No.: US20210105197A1
Publication Date: 2021-04-08
Application No.: US17086206
Filing Date: 2020-10-30
Applicant: Intel Corporation
Inventor: Susanne M. BALLE , Rahul KHANNA , Nishi AHUJA , Mrittika GANGULI
IPC: H04L12/26 , G06F16/901 , H04B10/25 , G02B6/38 , G02B6/42 , G02B6/44 , G06F1/18 , G06F1/20 , G06F3/06 , G06F8/65 , G06F9/30 , G06F9/4401 , G06F9/54 , G06F12/109 , G06F12/14 , G06F13/16 , G06F13/40 , G08C17/02 , G11C5/02 , G11C7/10 , G11C11/56 , G11C14/00 , H03M7/30 , H03M7/40 , H04L12/24 , H04L12/931 , H04L12/947 , H04L29/08 , H04L29/06 , H04Q11/00 , H05K7/14
Abstract: Technologies for allocating resources of managed nodes to workloads to balance multiple resource allocation objectives include an orchestrator server to receive resource allocation objective data indicative of multiple resource allocation objectives to be satisfied. The orchestrator server is additionally to determine an initial assignment of a set of workloads among the managed nodes and receive telemetry data from the managed nodes. The orchestrator server is further to determine, as a function of the telemetry data and the resource allocation objective data, an adjustment to the assignment of the workloads to increase an achievement of at least one of the resource allocation objectives without decreasing an achievement of another of the resource allocation objectives, and apply the adjustments to the assignments of the workloads among the managed nodes as the workloads are performed. Other embodiments are also described and claimed.
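The adjustment rule can be read as accepting only changes that improve at least one objective without hurting another; below is a minimal sketch under assumed scoring and proposal functions:

```python
def adjust_assignment(assignment, telemetry, objectives, propose):
    """Accept a proposed reassignment only if no objective gets worse and at least one improves."""
    current = [obj(assignment, telemetry) for obj in objectives]   # higher score = better
    candidate = propose(assignment, telemetry)                     # e.g., move one workload
    scores = [obj(candidate, telemetry) for obj in objectives]
    if all(s >= c for s, c in zip(scores, current)) and any(s > c for s, c in zip(scores, current)):
        return candidate   # apply the improving adjustment
    return assignment      # otherwise keep the existing placement
```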