-
1.
Publication No.: US11137922B2
Publication Date: 2021-10-05
Application No.: US15719770
Application Date: 2017-09-29
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Evan Custodio , Susanne M. Balle , Joe Grecco , Henry Mitchel , Rahul Khanna , Slawomir Putyrski , Sujoy Sen , Paul Dormitzer
IPC: G06F9/50 , G06F3/06 , G06F16/174 , G06F21/57 , G06F21/73 , G06F8/65 , H04L12/24 , H04L29/08 , G06F11/30 , H01R13/453 , G06F9/48 , H03M7/30 , H03M7/40 , H04L12/26 , H04L12/813 , H04L12/851 , G06F11/07 , G06F11/34 , G06F7/06 , G06T9/00 , H03M7/42 , H04L12/28 , H04L12/46 , H04L29/12 , G06F13/16 , G06F21/62 , G06F21/76 , H03K19/173 , H04L9/08 , H04L12/933 , G06F9/38 , G06F12/02 , G06F12/06 , G06T1/20 , G06T1/60 , G06F9/54 , G06F8/656 , G06F8/658 , G06F8/654 , G06F9/4401 , H01R13/631 , H05K7/14 , H04L12/911 , G06F11/14 , H04L29/06 , G06F15/80
Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution. Other embodiments are also described and claimed.
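The lookup flow in this abstract (request → kernel → configured accelerator sled → task assignment) can be pictured with a minimal sketch. The names below (KernelDatabase, assign_task) are hypothetical, since the patent does not define an API, and the first-match selection merely stands in for whatever placement policy an orchestrator would actually apply.

```python
# Minimal sketch of the kernel-to-sled lookup described in the abstract.
# All names are hypothetical; the patent does not specify an API.

class KernelDatabase:
    """Maps kernel identifiers to accelerator sleds already configured with them."""

    def __init__(self):
        self._kernels_to_sleds = {}  # kernel_id -> list of sled ids

    def register(self, kernel_id, sled_id):
        self._kernels_to_sleds.setdefault(kernel_id, []).append(sled_id)

    def sleds_for(self, kernel_id):
        return self._kernels_to_sleds.get(kernel_id, [])


def assign_task(db, request):
    """Pick an accelerator sled whose accelerator device is configured with the
    kernel associated with the request, then assign the task to it."""
    candidates = db.sleds_for(request["kernel_id"])
    if not candidates:
        raise LookupError("no accelerator sled is configured with this kernel")
    sled_id = candidates[0]  # a real orchestrator would weigh load, locality, etc.
    return {"sled": sled_id, "task": request["task"]}


db = KernelDatabase()
db.register("fft-kernel", "sled-7")
print(assign_task(db, {"kernel_id": "fft-kernel", "task": "transform-batch-42"}))
```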
-
2.
Publication No.: US20180150644A1
Publication Date: 2018-05-31
Application No.: US15721814
Application Date: 2017-09-30
Applicant: Intel Corporation
Inventor: Rahul Khanna , Susanne M. Balle , Francesc Guim Bernat , Sujoy Sen , Paul Dormitzer
IPC: G06F21/62 , G06F21/76 , H04L9/08 , G06F13/16 , H03K19/173
Abstract: Technologies for encrypted data access by field-programmable gate array (FPGA) user kernels include a computing device having an FPGA and an external memory device accessible by the FPGA. The FPGA includes a secure key store, a micro-encryption engine, and multiple slots for user kernels that are each identifiable with an index. A user kernel is programmed at an index and a symmetric encryption key is provisioned to the secure key store at the index. The micro-encryption engine may read encrypted data from the external memory device, decrypt the encrypted data with the key associated with the index of the user kernel, and forward plain text data to the user kernel. The micro-encryption engine may also receive plain text data from the user kernel, encrypt the plain text data with the key, and write the encrypted data to the external memory device. Other embodiments are described and claimed.
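A rough sketch of the per-slot key flow this abstract describes: a key store indexed by the same slot index as the user kernel, with the micro-encryption engine decrypting on reads from external memory and encrypting on writes. The XOR stream below is only a stand-in for the real symmetric cipher, and all class and method names are assumptions.

```python
# Illustrative sketch of the per-slot key flow; not the FPGA's actual design.
import itertools


class SecureKeyStore:
    def __init__(self):
        self._keys = {}  # slot index -> symmetric key bytes

    def provision(self, slot_index, key):
        self._keys[slot_index] = key

    def key_for(self, slot_index):
        return self._keys[slot_index]


def _xor_stream(data, key):
    # Placeholder for the symmetric cipher; XOR with a repeating key is NOT secure.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))


class MicroEncryptionEngine:
    def __init__(self, key_store, external_memory):
        self.key_store = key_store
        self.external_memory = external_memory  # address -> ciphertext bytes

    def read_for_kernel(self, slot_index, address):
        ciphertext = self.external_memory[address]
        return _xor_stream(ciphertext, self.key_store.key_for(slot_index))

    def write_for_kernel(self, slot_index, address, plaintext):
        self.external_memory[address] = _xor_stream(plaintext, self.key_store.key_for(slot_index))


store = SecureKeyStore()
store.provision(3, b"slot-3-key")
engine = MicroEncryptionEngine(store, external_memory={})
engine.write_for_kernel(3, 0x1000, b"kernel scratch data")
print(engine.read_for_kernel(3, 0x1000))
```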
-
3.
Publication No.: US20210141552A1
Publication Date: 2021-05-13
Application No.: US17125420
Application Date: 2020-12-17
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Evan Custodio , Susanne M. Balle , Joe Grecco , Henry Mitchel , Rahul Khanna , Slawomir Putyrski , Sujoy Sen , Paul Dormitzer
IPC: G06F3/06 , G06F16/174 , G06F21/57 , G06F21/73 , G06F8/65 , H04L12/24 , H04L29/08 , G06F11/30 , G06F9/50 , H01R13/453 , G06F9/48 , H03M7/30 , H03M7/40 , H04L12/26 , H04L12/813 , H04L12/851 , G06F11/07 , G06F11/34 , G06F7/06 , G06T9/00 , H03M7/42 , H04L12/28 , H04L12/46 , H04L29/12 , G06F13/16 , G06F21/62 , G06F21/76 , H03K19/173 , H04L9/08 , H04L12/933 , G06F9/38 , G06F12/02 , G06F12/06 , G06T1/20 , G06T1/60 , G06F9/54 , G06F8/656 , G06F8/658 , G06F8/654 , G06F9/4401 , H01R13/631 , H05K7/14
Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution. Other embodiments are also described and claimed.
-
4.
Publication No.: US20190065253A1
Publication Date: 2019-02-28
Application No.: US15859370
Application Date: 2017-12-30
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Susanne M. Balle , Slawomir Putyrski , Rahul Khanna , Paul Dormitzer
Abstract: Technologies for pre-configuring accelerators by predicting bit-streams include communication circuitry and a compute device. The compute device includes a compute engine to determine one or more bit-streams registered on each accelerator of multiple accelerators. The compute engine is further to predict a next job to be requested for acceleration from an application of at least one compute sled of multiple compute sleds, predict a bit-stream from a bit-stream library that is to execute the predicted next job requested to be accelerated, and determine whether the predicted bit-stream is already registered on one of the accelerators. In response to a determination that the predicted bit-stream is not registered on one of the accelerators, the compute engine is to select an accelerator from the plurality of accelerators that satisfies characteristics of the predicted bit-stream and register the predicted bit-stream on the determined accelerator.
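The pre-configuration loop can be sketched as: predict the next job, map it to a bit-stream in the library, and register that bit-stream on a suitable accelerator only if no accelerator already has it. Everything below is illustrative; the predictor is reduced to a most-frequent-recent-job heuristic, and the accelerator fields are invented for the example.

```python
# Hedged sketch of the pre-configuration flow in the abstract; all names are hypothetical.

def predict_next_job(job_history):
    # Stand-in predictor: assume the most frequent recent job recurs.
    return max(set(job_history), key=job_history.count)


def preconfigure(job_history, bitstream_library, accelerators):
    next_job = predict_next_job(job_history)
    bitstream = bitstream_library[next_job]          # bit-stream expected to run the job
    for acc in accelerators:
        if bitstream["id"] in acc["registered"]:
            return acc["name"]                        # already registered, nothing to do
    # Otherwise pick an accelerator that satisfies the bit-stream's characteristics.
    for acc in accelerators:
        if acc["free_slots"] > 0 and acc["fabric"] == bitstream["fabric"]:
            acc["registered"].add(bitstream["id"])
            return acc["name"]
    raise RuntimeError("no accelerator satisfies the predicted bit-stream")


library = {"resize-images": {"id": "bs-17", "fabric": "fpga"}}
accels = [{"name": "acc-0", "registered": set(), "free_slots": 2, "fabric": "fpga"}]
print(preconfigure(["resize-images", "resize-images", "encode"], library, accels))
```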
-
5.
Publication No.: US20180150334A1
Publication Date: 2018-05-31
Application No.: US15719770
Application Date: 2017-09-29
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Evan Custodio , Susanne M. Balle , Joe Grecco , Henry Mitchel , Rahul Khanna , Slawomir Putyrski , Sujoy Sen , Paul Dormitzer
CPC classification number: G06F3/0641 , G06F3/0604 , G06F3/0608 , G06F3/0611 , G06F3/0613 , G06F3/0617 , G06F3/0647 , G06F3/065 , G06F3/0653 , G06F3/067 , G06F7/06 , G06F8/65 , G06F8/654 , G06F8/656 , G06F8/658 , G06F9/3851 , G06F9/3891 , G06F9/4401 , G06F9/4881 , G06F9/5038 , G06F9/505 , G06F9/544 , G06F11/0709 , G06F11/0751 , G06F11/079 , G06F11/1453 , G06F11/3006 , G06F11/3034 , G06F11/3055 , G06F11/3409 , G06F12/023 , G06F12/0284 , G06F12/0692 , G06F13/1652 , G06F15/80 , G06F16/1744 , G06F21/57 , G06F21/6218 , G06F21/73 , G06F21/76 , G06F2212/401 , G06F2212/402 , G06F2221/2107 , G06T1/20 , G06T1/60 , G06T9/005 , H01R13/4538 , H01R13/631 , H03K19/1731 , H03M7/3084 , H03M7/40 , H03M7/42 , H03M7/60 , H03M7/6011 , H03M7/6017 , H03M7/6029 , H04L9/0822 , H04L12/2881 , H04L12/4633 , H04L41/044 , H04L41/046 , H04L41/0816 , H04L41/0853 , H04L41/0896 , H04L41/12 , H04L41/142 , H04L43/04 , H04L43/06 , H04L43/08 , H04L43/0894 , H04L47/20 , H04L47/2441 , H04L47/78 , H04L49/104 , H04L61/2007 , H04L63/1425 , H04L67/10 , H04L67/1014 , H04L67/327 , H04L67/36 , H05K7/1452 , H05K7/1487
Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution. Other embodiments are also described and claimed.
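Complementing the lookup sketch under entry 1, here is a small sketch of how the database "indicative of kernels and associated accelerator sleds" might be populated as sleds report which kernels their accelerator devices currently hold; the function and field names are assumptions.

```python
# Hypothetical registration side of the kernel-to-sled database.
from collections import defaultdict

kernel_index = defaultdict(set)   # kernel_id -> set of sled ids


def report_configuration(sled_id, configured_kernels):
    """Called when an accelerator sled reports its currently configured kernels."""
    for kernel_id in configured_kernels:
        kernel_index[kernel_id].add(sled_id)


report_configuration("sled-3", ["crc32-kernel", "fft-kernel"])
report_configuration("sled-9", ["fft-kernel"])
print(sorted(kernel_index["fft-kernel"]))   # sleds eligible for an fft task
```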
-
6.
Publication No.: US10334334B2
Publication Date: 2019-06-25
Application No.: US15394338
Application Date: 2016-12-29
Applicant: INTEL CORPORATION
Inventor: Steven C. Miller , Michael Crocker , Aaron Gorius , Paul Dormitzer
IPC: G06F12/10 , H04Q11/00 , H03M7/40 , H03M7/30 , G06F16/901 , G06F3/06 , G11C7/10 , H05K7/14 , G06F1/18 , G06F13/40 , H05K5/02 , G08C17/02 , H04L12/24 , H04L29/08 , H04L12/26 , H04L12/851 , G06F9/50 , H04L12/911 , G06F12/109 , H04L29/06 , G11C14/00 , G11C5/02 , G11C11/56 , G02B6/44 , G06F8/65 , G06F12/14 , G06F13/16 , H04B10/25 , G06F9/4401 , G02B6/38 , G02B6/42 , B25J15/00 , B65G1/04 , H05K7/20 , H04L12/931 , H04L12/939 , H04W4/02 , H04L12/751 , G06F13/42 , H05K1/18 , G05D23/19 , G05D23/20 , H04L12/927 , H05K1/02 , H04L12/781 , H04Q1/04 , G06F12/0893 , H05K13/04 , G11C5/06 , G06F11/14 , G06F11/34 , G06F12/0862 , G06F15/80 , H04L12/919 , G06Q10/06 , G07C5/00 , H04L12/28 , H04L29/12 , H04L9/06 , H04L9/14 , H04L9/32 , H04L12/933 , H04L12/947 , H04L12/811 , G06F17/30 , H04W4/80 , G06Q10/08 , G06Q10/00 , G06Q50/04
Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises an array of storage devices and an array of memory. The storage devices and memory are directly coupled to storage resource processing circuits, which are themselves directly coupled to dual-mode optical network interface circuitry. The circuitry can store data on the storage devices and metadata associated with the data on non-volatile memory in the memory array.
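The data/metadata split described here can be illustrated with a short sketch: payload bytes land on the storage-device array while a metadata record is kept in the non-volatile portion of the memory array. The class layout and metadata fields are assumptions, not the sled's actual design.

```python
# Illustrative split of data vs. metadata; structures are hypothetical.
import hashlib
import time


class StorageSled:
    def __init__(self, num_devices):
        self.devices = [dict() for _ in range(num_devices)]  # key -> data blocks
        self.nvm_metadata = {}                               # key -> metadata record

    def store(self, key, data):
        device_index = hash(key) % len(self.devices)
        self.devices[device_index][key] = data
        self.nvm_metadata[key] = {
            "device": device_index,
            "length": len(data),
            "checksum": hashlib.sha256(data).hexdigest(),
            "written_at": time.time(),
        }

    def load(self, key):
        meta = self.nvm_metadata[key]
        return self.devices[meta["device"]][key]


sled = StorageSled(num_devices=16)
sled.store("object-42", b"payload bytes")
print(sled.load("object-42"), sled.nvm_metadata["object-42"]["device"])
```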
-
7.
Publication No.: US20190068444A1
Publication Date: 2019-02-28
Application No.: US15859363
Application Date: 2017-12-30
Applicant: Intel Corporation
Inventor: Joe Grecco , Sujoy Sen , Francesc Guim Bernat , Susanne M. Balle , Evan Custodio , Paul Dormitzer , Henry Mitchel
IPC: H04L12/24
Abstract: Technologies for providing efficient transfer of results from remote accelerator devices include a compute sled. The compute sled is to send a request to utilize an accelerator device on an accelerator sled. The request includes a data object to be processed by the accelerator device to increase the speed of execution of a workload associated with the data object. The compute sled is also to receive a modification map from the accelerator sled indicative of a modification to the data object. Further, the compute sled is to determine the modification to the data object based on the modification map and apply the modification to the data object in a memory device of the compute sled.
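The "modification map" idea lends itself to a small sketch: the accelerator sled returns only the byte ranges it changed, and the compute sled patches its in-memory copy. The offset-to-bytes map format is an assumption; the patent does not specify the encoding.

```python
# Hypothetical modification-map application on the compute sled side.

def apply_modification_map(local_object: bytearray, modification_map: dict) -> bytearray:
    """Apply each modified range to the compute sled's in-memory copy."""
    for offset, new_bytes in sorted(modification_map.items()):
        local_object[offset:offset + len(new_bytes)] = new_bytes
    return local_object


data = bytearray(b"abcdefghij")
mod_map = {2: b"XY", 7: b"Z"}                  # accelerator changed bytes 2-3 and byte 7
print(apply_modification_map(data, mod_map))   # bytearray(b'abXYefgZij')
```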
-
8.
Publication No.: US20190034102A1
Publication Date: 2019-01-31
Application No.: US15856220
Application Date: 2017-12-28
Applicant: Intel Corporation
Inventor: Steven Miller , Paul Dormitzer
IPC: G06F3/06
Abstract: Technologies for allocating data storage capacity on a data storage sled include a plurality of data storage devices communicatively coupled to a plurality of network switches through a plurality of physical network connections and a data storage controller connected to the plurality of data storage devices. The data storage controller is to determine a target storage resource allocation to be used by one or more applications to be executed by one or more sleds in a data center, determine data storage capacity available for each of a plurality of different data storage types on the data storage sled, wherein each data storage type is associated with a different level of data redundancy, determine an amount of data storage capacity for each data storage type to be allocated to satisfy the target storage resource allocation, and adjust the amount of data storage capacity allocated to each data storage type.
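A hedged sketch of the allocation step: given a target storage allocation and several storage types with different redundancy levels, decide how much raw capacity of each type to set aside. The overhead factors and the greedy, most-efficient-first order are illustrative assumptions, not the controller's actual policy.

```python
# Illustrative capacity allocation across storage types with different redundancy.

STORAGE_TYPES = [
    # name, raw capacity available (GB), raw GB needed per usable GB (redundancy overhead)
    {"name": "mirrored", "available": 2000, "overhead": 2.0},
    {"name": "parity",   "available": 6000, "overhead": 1.25},
    {"name": "plain",    "available": 4000, "overhead": 1.0},
]


def allocate(target_usable_gb):
    remaining = target_usable_gb
    plan = {}
    # Prefer the most space-efficient types first; a real controller would also
    # weigh the redundancy level each application actually requires.
    for t in sorted(STORAGE_TYPES, key=lambda t: t["overhead"]):
        usable_here = min(remaining, t["available"] / t["overhead"])
        if usable_here > 0:
            plan[t["name"]] = usable_here * t["overhead"]   # raw GB to allocate
            remaining -= usable_here
    if remaining > 0:
        raise RuntimeError(f"short by {remaining:.0f} usable GB")
    return plan


print(allocate(target_usable_gb=7000))
```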
-
9.
Publication No.: US10091904B2
Publication Date: 2018-10-02
Application No.: US15394321
Application Date: 2016-12-29
Applicant: INTEL CORPORATION
Inventor: Steven C. Miller , Michael Crocker , Aaron Gorius , Paul Dormitzer
IPC: H05K7/14
Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises an array of storage devices and an array of memory. The storage devices and memory are directly coupled to storage resource processing circuits, which are themselves directly coupled to dual-mode optical network interface circuitry. The dual-mode optical network interface circuitry can have a bandwidth equal to or greater than that of the storage devices.
-
10.
Publication No.: US11579788B2
Publication Date: 2023-02-14
Application No.: US16943221
Application Date: 2020-07-30
Applicant: Intel Corporation
Inventor: Henry Mitchel , Joe Grecco , Sujoy Sen , Francesc Guim Bernat , Susanne M. Balle , Evan Custodio , Paul Dormitzer
IPC: G06F3/06 , G06F16/174 , G06F21/57 , G06F21/73 , G06F8/65 , H04L41/0816 , H04L41/0853 , H04L41/12 , H04L67/10 , G06F11/30 , G06F9/50 , H01R13/453 , G06F9/48 , G06F9/455 , H05K7/14 , H04L61/5007 , H04L67/63 , H04L67/75 , H03M7/30 , H03M7/40 , H04L43/08 , H04L47/20 , H04L47/2441 , G06F11/07 , G06F11/34 , G06F7/06 , G06T9/00 , H03M7/42 , H04L12/28 , H04L12/46 , G06F13/16 , G06F21/62 , G06F21/76 , H03K19/173 , H04L9/08 , H04L41/044 , H04L49/104 , H04L43/04 , H04L43/06 , H04L43/0894 , G06F9/38 , G06F12/02 , G06F12/06 , G06T1/20 , G06T1/60 , G06F9/54 , H04L67/1014 , G06F8/656 , G06F8/658 , G06F8/654 , G06F9/4401 , H01R13/631 , H04L47/78 , G06F16/28 , H04Q11/00 , G06F11/14 , H04L41/046 , H04L41/0896 , H04L41/142 , H04L9/40 , G06F15/80
Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
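The translation step this abstract describes, mapping the logical address in an accelerator's request to a physical address and the owning memory device, can be sketched as below; the fixed page size and the map layout are assumptions.

```python
# Hypothetical logical-to-physical translation on the accelerator sled's memory controller.

PAGE_SIZE = 4096
# logical page number -> (memory device id, physical page number)
ADDRESS_MAP = {0x10: ("dimm-0", 0x200), 0x11: ("dimm-3", 0x054)}


def route_access(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    try:
        device, physical_page = ADDRESS_MAP[page]
    except KeyError:
        raise ValueError(f"no physical mapping for logical page {page:#x}")
    return device, physical_page * PAGE_SIZE + offset


device, physical_address = route_access(0x10_0A0)
print(device, hex(physical_address))   # dimm-0 0x2000a0
```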
-