-
Publication Number: US10990309B2
Publication Date: 2021-04-27
Application Number: US15721833
Filing Date: 2017-09-30
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Evan Custodio , Susanne M. Balle , Joe Grecco , Henry Mitchel , Slawomir Putyrski
IPC: G06F9/46 , G06F3/06 , G06F16/174 , G06F21/57 , G06F21/73 , G06F8/65 , H04L12/24 , H04L29/08 , G06F11/30 , G06F9/50 , H01R13/453 , G06F9/48 , H03M7/30 , H03M7/40 , H04L12/26 , H04L12/813 , H04L12/851 , G06F11/07 , G06F11/34 , G06F7/06 , G06T9/00 , H03M7/42 , H04L12/28 , H04L12/46 , H04L29/12 , G06F13/16 , G06F21/62 , G06F21/76 , H03K19/173 , H04L9/08 , H04L12/933 , G06F9/38 , G06F12/02 , G06F12/06 , G06T1/20 , G06T1/60 , G06F9/54 , G06F8/656 , G06F8/658 , G06F8/654 , G06F9/4401 , H01R13/631 , H05K7/14 , H04L12/911 , G06F11/14 , H04L29/06 , G06F15/80
Abstract: A compute device to manage workflow to disaggregated computing resources is provided. The compute device comprises a compute engine to receive a workload processing request, the workload processing request defined by at least one request parameter, determine at least one accelerator device capable of processing a workload in accordance with the at least one request parameter, transmit the workload to the at least one accelerator device, receive a work product produced by the at least one accelerator device from the workload, and provide the work product to an application.
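As a rough illustration only (the accelerator names, capability sets, and dispatch policy below are invented, not taken from the patent), the matching and dispatch flow described in the abstract could be sketched as:

```python
# Illustrative sketch: match a workload request's parameters against
# accelerator capabilities, dispatch the workload, and return the work product.
from dataclasses import dataclass, field

@dataclass
class Accelerator:
    name: str
    capabilities: set = field(default_factory=set)

    def run(self, workload):
        # Stand-in for executing the workload and producing a work product.
        return f"{self.name} processed {workload}"

def dispatch(request_params, workload, accelerators):
    """Pick an accelerator able to satisfy every request parameter,
    run the workload on it, and return the work product."""
    for acc in accelerators:
        if request_params <= acc.capabilities:
            return acc.run(workload)
    raise RuntimeError("no accelerator satisfies the request parameters")

accelerators = [Accelerator("fpga-0", {"compression"}),
                Accelerator("gpu-0", {"inference", "encryption"})]
print(dispatch({"inference"}, "image-batch-17", accelerators))
```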
-
Publication Number: US20190065260A1
Publication Date: 2019-02-28
Application Number: US15858316
Filing Date: 2017-12-29
Applicant: Intel Corporation
Inventor: Susanne M. Balle , Evan Custodio , Francesc Guim Bernat , Slawomir Putyrski
Abstract: Technologies for scaling provisioning of kernel instances in a system as a function of a topology of accelerated kernels include a compute device having a compute engine. The compute engine receives, from a sled, a kernel configuration request to provision a kernel on an accelerator device. The sled is to execute a workload. The kernel accelerates a task in the workload. The compute engine determines, as a function of one or more requirements of the workload, a topology of kernels to service the request. The topology maps data communication between kernels. The compute engine configures the kernel on the accelerator device according to the determined topology. Other embodiments are also described and claimed.
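A minimal sketch, assuming hypothetical kernel names and a trivial requirement-to-topology rule (none of which come from the patent), of deriving a kernel topology from workload requirements and then configuring each kernel:

```python
# Hypothetical sketch: derive a kernel topology (which kernel streams data to
# which) from workload requirements, then provision kernels accordingly.
def determine_topology(requirements):
    """Return a mapping kernel -> downstream kernels it feeds data to."""
    if "decompress" in requirements:
        return {"decompress": ["inference"], "inference": []}
    return {"inference": []}

def configure(topology):
    for kernel, consumers in topology.items():
        print(f"provisioning kernel '{kernel}', streaming output to {consumers or 'host'}")

configure(determine_topology({"decompress", "inference"}))
```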
-
Publication Number: US20190065231A1
Publication Date: 2019-02-28
Application Number: US15859388
Filing Date: 2017-12-30
Applicant: Intel Corporation
Inventor: Mark A. Schmisseur , Mohan J. Kumar , Murugasamy K. Nachimuthu , Slawomir Putyrski , Dimitrios Ziakas
Abstract: Technologies for migrating virtual machines (VMs) include a plurality of compute sleds and a memory sled each communicatively coupled to a resource manager server. The resource manager server is configured to identify a compute sled for a virtual machine (VM) instance, allocate a first set of resources of the identified compute sled for the VM instance, associate a region of memory in a memory pool of a memory sled with the compute sled, and create the VM instance on the compute sled. The resource manager server is further configured to migrate the VM instance to another compute sled, associate the region of memory in the memory pool with the other compute sled, and start up the VM instance on the other compute sled. Other embodiments are described herein.
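The sketch below (sled and region identifiers are made up) illustrates the key idea the abstract describes: migration re-associates a pooled memory region with a new compute sled rather than copying the data.

```python
# Illustrative sketch: migrate a VM between compute sleds while its backing
# memory region stays in a shared memory pool; only the association changes.
class MemoryPool:
    def __init__(self):
        self.regions = {}          # region_id -> owning compute sled

    def associate(self, region_id, sled):
        self.regions[region_id] = sled

pool = MemoryPool()

def create_vm(vm_id, sled, region_id):
    pool.associate(region_id, sled)
    print(f"{vm_id} created on {sled}, memory region {region_id} attached")

def migrate_vm(vm_id, new_sled, region_id):
    # The data never leaves the memory pool; the region is simply re-associated.
    pool.associate(region_id, new_sled)
    print(f"{vm_id} restarted on {new_sled}, region {region_id} re-associated")

create_vm("vm-1", "compute-sled-a", "region-42")
migrate_vm("vm-1", "compute-sled-b", "region-42")
```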
-
Publication Number: US10120727B2
Publication Date: 2018-11-06
Application Number: US15112339
Filing Date: 2015-02-23
Applicant: Intel Corporation
Inventor: Katalin K. Bartfai-Walcott , Alexander Leckey , John Kennedy , Chris Woods , Giovani Estrada , Joseph Butler , Michael J. McGrath , Slawomir Putyrski
IPC: G06F9/46 , G06F15/177 , G06F15/173 , G06F9/50
Abstract: Examples may include techniques for allocating configurable computing resources from a pool of configurable computing resources to a logical server or virtual machine. The logical server or virtual machine may use allocated configurable computing resources to implement, execute or run a workload.
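A minimal sketch, with made-up resource names and quantities, of carving a logical server's allocation out of a pool of configurable computing resources:

```python
# Illustrative sketch: allocate configurable resources from a shared pool to a
# logical server, failing if the pool cannot cover the request.
pool = {"cpu_cores": 64, "memory_gb": 512, "ssd_gb": 8000}

def allocate(pool, request):
    if any(pool[k] < v for k, v in request.items()):
        raise RuntimeError("insufficient resources in pool")
    for k, v in request.items():
        pool[k] -= v
    return dict(request)           # the logical server's allocation

logical_server = allocate(pool, {"cpu_cores": 8, "memory_gb": 64, "ssd_gb": 500})
print(logical_server, pool)
```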
-
Publication Number: US10102035B2
Publication Date: 2018-10-16
Application Number: US15112309
Filing Date: 2015-02-23
Applicant: Intel Corporation
Inventor: Katalin K. Bartfai-Walcott , John Kennedy , Thijs Metsch , Chris Woods , Giovani Estrada , Alexander Leckey , Joseph Butler , Slawomir Putyrski
IPC: G06F9/50
Abstract: Examples are described for computing resource discovery and management for a system of configurable computing resources that may include disaggregate physical elements such as central processing units, storage devices, memory devices, network input/output devices or network switches. In some examples, these disaggregate physical elements may be located within one or more racks of a data center.
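As a rough illustration (rack layout and element identifiers are invented), discovery could be sketched as flattening the disaggregate physical elements reported per rack into one searchable inventory:

```python
# Hypothetical discovery sketch: collect disaggregated physical elements
# (CPUs, storage, memory, NICs) reported by each rack into a single inventory.
racks = {
    "rack-1": [{"type": "cpu", "id": "cpu-0"}, {"type": "nvme", "id": "ssd-3"}],
    "rack-2": [{"type": "memory", "id": "dimm-7"}, {"type": "nic", "id": "nic-1"}],
}

def discover(racks):
    return [dict(element, rack=rack) for rack, elements in racks.items()
            for element in elements]

inventory = discover(racks)
print([e["id"] for e in inventory if e["type"] != "cpu"])
```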
-
Publication Number: US20250150362A1
Publication Date: 2025-05-08
Application Number: US19013402
Filing Date: 2025-01-08
Applicant: Intel Corporation
Inventor: Rameshkumar Illikkal , Anna Drewek-Ossowicka , Dharmisha Ketankumar Doshi , Qian Li , Andrzej Kuriata , Andrew J. Herdrich , Teck Joo Goh , Daniel Richins , Slawomir Putyrski , Wenhui Shu , Long Cui , Jinshi Chen , Mihai Daniel Dodan
IPC: H04L41/5019 , G06F9/50
Abstract: Various approaches to efficiently allocating and utilizing hardware resources in data centers while maintaining compliance with a service level agreement are described. In various embodiments, an application-level service level objective (SLO) specified for a computational workload is translated into a hardware-level SLO to facilitate direct enforcement by the hardware processor, e.g., using a feedback control loop or model-based mapping of the hardware-level SLO to allocations of microarchitecture resources of the processor. In some embodiments, a computational model of the hardware behavior under resource contention is used to predict the application performance (e.g., as measured in terms of the hardware-level SLO) to be expected under certain contention scenarios. Scheduling of workloads among the compute nodes within the data center may be based on such predictions. In further embodiments, configurations of microservices are optimized to minimize hardware resources while meeting a specified performance goal.
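The following sketch is illustrative only: the mapping from a latency SLO to an IPC target, the hardware model, and the controlled resource (cache ways) are all invented stand-ins, but it shows the shape of a feedback control loop that steers a microarchitecture resource toward a hardware-level SLO.

```python
# Illustrative feedback-control sketch: translate an application-level latency
# SLO into a hardware-level target, then nudge a microarchitecture resource
# allocation until the measured hardware metric meets that target.
def app_slo_to_hw_slo(latency_ms_target):
    # Hypothetical mapping; a real system would derive this from profiling.
    return 2.0 * (10.0 / latency_ms_target)      # target instructions per cycle

def measured_ipc(cache_ways):
    return 0.25 * cache_ways                     # stand-in hardware model

def control_loop(latency_ms_target, steps=20):
    target_ipc, cache_ways = app_slo_to_hw_slo(latency_ms_target), 2
    for _ in range(steps):
        error = target_ipc - measured_ipc(cache_ways)
        if abs(error) < 0.1:
            break
        cache_ways = max(1, min(20, cache_ways + (1 if error > 0 else -1)))
    return cache_ways

print("cache ways allocated:", control_loop(latency_ms_target=5.0))
```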
-
Publication Number: US11995330B2
Publication Date: 2024-05-28
Application Number: US17125420
Filing Date: 2020-12-17
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Evan Custodio , Susanne M. Balle , Joe Grecco , Henry Mitchel , Rahul Khanna , Slawomir Putyrski , Sujoy Sen , Paul Dormitzer
IPC: G06F3/06 , G06F7/06 , G06F8/65 , G06F8/654 , G06F8/656 , G06F8/658 , G06F9/38 , G06F9/4401 , G06F9/455 , G06F9/48 , G06F9/50 , G06F9/54 , G06F11/07 , G06F11/30 , G06F11/34 , G06F12/02 , G06F12/06 , G06F13/16 , G06F16/174 , G06F21/57 , G06F21/62 , G06F21/73 , G06F21/76 , G06T1/20 , G06T1/60 , G06T9/00 , H01R13/453 , H01R13/631 , H03K19/173 , H03M7/30 , H03M7/40 , H03M7/42 , H04L9/08 , H04L12/28 , H04L12/46 , H04L41/044 , H04L41/0816 , H04L41/0853 , H04L41/12 , H04L43/04 , H04L43/06 , H04L43/08 , H04L43/0894 , H04L47/20 , H04L47/2441 , H04L49/104 , H04L61/5007 , H04L67/10 , H04L67/1014 , H04L67/63 , H04L67/75 , H05K7/14 , G06F11/14 , G06F15/80 , G06F16/28 , H04L9/40 , H04L41/046 , H04L41/0896 , H04L41/142 , H04L47/78 , H04Q11/00
CPC classification number: G06F3/0641 , G06F3/0604 , G06F3/0608 , G06F3/0611 , G06F3/0613 , G06F3/0617 , G06F3/0647 , G06F3/065 , G06F3/0653 , G06F3/067 , G06F7/06 , G06F8/65 , G06F8/654 , G06F8/656 , G06F8/658 , G06F9/3851 , G06F9/3891 , G06F9/4401 , G06F9/45533 , G06F9/4843 , G06F9/4881 , G06F9/5005 , G06F9/5038 , G06F9/5044 , G06F9/505 , G06F9/5083 , G06F9/544 , G06F11/0709 , G06F11/0751 , G06F11/079 , G06F11/3006 , G06F11/3034 , G06F11/3055 , G06F11/3079 , G06F11/3409 , G06F12/0284 , G06F12/0692 , G06F13/1652 , G06F16/1744 , G06F21/57 , G06F21/6218 , G06F21/73 , G06F21/76 , G06T1/20 , G06T1/60 , G06T9/005 , H01R13/453 , H01R13/4536 , H01R13/4538 , H01R13/631 , H03K19/1731 , H03M7/3084 , H03M7/40 , H03M7/42 , H03M7/60 , H03M7/6011 , H03M7/6017 , H03M7/6029 , H04L9/0822 , H04L12/2881 , H04L12/4633 , H04L41/044 , H04L41/0816 , H04L41/0853 , H04L41/12 , H04L43/04 , H04L43/06 , H04L43/08 , H04L43/0894 , H04L47/20 , H04L47/2441 , H04L49/104 , H04L61/5007 , H04L67/10 , H04L67/1014 , H04L67/63 , H04L67/75 , H05K7/1452 , H05K7/1487 , H05K7/1491 , G06F11/1453 , G06F12/023 , G06F15/80 , G06F16/285 , G06F2212/401 , G06F2212/402 , G06F2221/2107 , H04L41/046 , H04L41/0896 , H04L41/142 , H04L47/78 , H04L63/1425 , H04Q11/0005 , H05K7/1447 , H05K7/1492
Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution. Other embodiments are also described and claimed.
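A sketch with hypothetical kernel names and sled identifiers (not from the patent) of the lookup-and-assign step: consult a database mapping kernels to accelerator sleds, then assign the task to a sled already configured with the needed kernel.

```python
# Illustrative sketch: assign an accelerated task to a sled whose accelerator
# device is already configured with the kernel the task requires.
kernel_db = {
    "jpeg-decode": ["accel-sled-1", "accel-sled-4"],
    "aes-gcm":     ["accel-sled-2"],
}

def assign_task(task, kernel, db):
    sleds = db.get(kernel)
    if not sleds:
        raise LookupError(f"no sled is configured with kernel '{kernel}'")
    chosen = sleds[0]      # a real scheduler would also weigh load, locality, etc.
    print(f"task '{task}' assigned to {chosen} (kernel '{kernel}')")
    return chosen

assign_task("thumbnail-batch", "jpeg-decode", kernel_db)
```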
-
Publication Number: US11861424B2
Publication Date: 2024-01-02
Application Number: US17471927
Filing Date: 2021-09-10
Applicant: Intel Corporation
Inventor: Evan Custodio , Susanne M. Balle , Francesc Guim Bernat , Slawomir Putyrski , Joe Grecco , Henry Mitchel
IPC: G06F9/54 , G02B6/44 , G06F15/78 , H03K19/0175
CPC classification number: G06F9/545 , G02B6/444 , G06F9/541 , G06F9/544 , G06F15/7871 , H03K19/017581
Abstract: Technologies for providing efficient reprovisioning in an accelerator device include an accelerator sled. The accelerator sled includes a memory and an accelerator device coupled to the memory. The accelerator device is to configure itself with a first bit stream to establish a first kernel, execute the first kernel to produce output data, write the output data to the memory, configure itself with a second bit stream to establish a second kernel, and execute the second kernel with the output data in the memory used as input data to the second kernel. Other embodiments are also described and claimed.
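Purely as an analogy (bit streams are modeled as Python callables, which is not how FPGA reconfiguration works in practice), the reprovisioning sequence the abstract describes can be sketched as: run kernel 1, keep its output in sled-local memory, reconfigure, then run kernel 2 on that output.

```python
# Illustrative sketch: the accelerator's local memory carries the first
# kernel's output into the second kernel across a reconfiguration.
class AcceleratorSled:
    def __init__(self):
        self.memory = None                 # sled-local memory shared by kernels
        self.kernel = None

    def configure(self, bit_stream):
        self.kernel = bit_stream           # reprovision without touching memory

    def execute(self):
        self.memory = self.kernel(self.memory)
        return self.memory

sled = AcceleratorSled()
sled.configure(lambda _: [3, 1, 2])        # first kernel produces output data
sled.execute()
sled.configure(lambda data: sorted(data))  # second kernel consumes it as input
print(sled.execute())                      # -> [1, 2, 3]
```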
-
Publication Number: US11573900B2
Publication Date: 2023-02-07
Application Number: US16568048
Filing Date: 2019-09-11
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Slawomir Putyrski , Susanne M. Balle
IPC: G06F12/00 , G06F12/0862 , G06F12/0813 , G06F11/30
Abstract: Examples described herein relate to prefetching content from a remote memory device to a memory tier local to a higher level cache or memory. An application or device can indicate a time availability for data to be available in a higher level cache or memory. A prefetcher used by a network interface can allocate resources in any intermediary network device in a data path from the remote memory device to the memory tier local to the higher level cache. Memory access bandwidth, egress bandwidth, memory space in any intermediary network device can be allocated for prefetch of content. In some examples, proactive prefetch can occur for content expected to be prefetched but not requested to be prefetched.
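A minimal sketch, assuming an invented path description and a crude bandwidth model, of scheduling a prefetch so the data arrives in the local tier before the time the application says it needs it, while reserving capacity on each intermediary device along the path:

```python
# Hypothetical sketch: compute the latest start time for a prefetch given a
# deadline, and reserve bandwidth on each hop of the path to local memory.
def schedule_prefetch(size_mb, deadline_ms, path):
    # MB divided by GB/s approximates milliseconds (1 GB ~ 1000 MB).
    transfer_ms = sum(size_mb / hop["gb_per_s"] for hop in path)
    start_by = deadline_ms - transfer_ms
    if start_by < 0:
        raise RuntimeError("deadline cannot be met on this path")
    for hop in path:
        hop["reserved_mb"] = hop.get("reserved_mb", 0) + size_mb
    return start_by

path = [{"name": "remote-memory-sled", "gb_per_s": 12},
        {"name": "tor-switch", "gb_per_s": 25}]
print("start prefetch no later than t+%.2f ms" % schedule_prefetch(64, 50, path))
```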
-
Publication Number: US11531635B2
Publication Date: 2022-12-20
Application Number: US17088513
Filing Date: 2020-11-03
Applicant: Intel Corporation
Inventor: Susanne M. Balle , Evan Custodio , Francesc Guim Bernat , Sujoy Sen , Slawomir Putyrski , Paul Dormitzer , Joseph Grecco
Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel and establish, in response to a determination to establish the logical communication path as a function of the obtained availability data, the logical communication path between the kernel of the present accelerator device and the other accelerator device kernel.
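A sketch with invented availability data (the kernel and link names are placeholders) showing the decision the abstract describes: establish a logical communication path to a peer kernel only if the availability data lists the peer as reachable over at least one physical path.

```python
# Illustrative sketch: pick a physical path from the availability data and
# expose it to the kernel as an abstract logical communication path.
availability = {
    "kernel-B": {"available": True,  "paths": ["pcie-switch-0", "fabric-link-3"]},
    "kernel-C": {"available": False, "paths": ["fabric-link-1"]},
}

def establish_logical_path(local_kernel, peer, availability):
    info = availability.get(peer)
    if not info or not info["available"] or not info["paths"]:
        return None
    chosen = info["paths"][0]      # the physical path is hidden from the caller
    print(f"logical path {local_kernel} -> {peer} over {chosen}")
    return chosen

establish_logical_path("kernel-A", "kernel-B", availability)
establish_logical_path("kernel-A", "kernel-C", availability)   # no path established
```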
-