-
Publication No.: US20220222010A1
Publication Date: 2022-07-14
Application No.: US17710657
Filing Date: 2022-03-31
Applicant: Intel Corporation
Inventor: Alexander BACHMUTSKY , Francesc GUIM BERNAT , Karthik KUMAR , Marcos E. CARRANZA
IPC: G06F3/06
Abstract: Methods and apparatus for advanced interleaving techniques for fabric-based pooling architectures. The method is implemented in an environment including a switch connected to host servers and to pooled memory nodes or memory servers hosting memory pools. Memory is interleaved across the memory pools using interleaving units, with the interleaved memory mapped into a global memory address space. Applications running on the host servers access data stored in the memory pools via memory read and write requests that specify address endpoints within the global memory address space. The switch generates multi-cast or multiple unicast messages associated with the memory read and write requests and sends them to the pooled memory nodes or memory servers. For memory reads, the data returned from multiple memory pools is aggregated at the switch and returned to the application as a single response in one or more packets.
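The interleaving scheme lends itself to a short illustration. Below is a minimal sketch, assuming round-robin interleaving with a fixed unit size, of how a global address might decode to a (pool, offset) pair and how a read spanning several units fans out into per-pool sub-reads and is aggregated into one response; the names UNIT, POOLS, and read_from_pool are hypothetical, not from the patent.

```python
# Minimal sketch of interleaved-address decoding and read fan-out; all
# constants and the backend callback are assumptions for illustration.

UNIT = 4096          # interleaving unit size in bytes (assumed)
POOLS = 4            # number of memory pools behind the switch (assumed)

def decode(global_addr):
    """Map a global address to (pool, local offset) under round-robin interleaving."""
    unit_index = global_addr // UNIT
    pool = unit_index % POOLS
    local = (unit_index // POOLS) * UNIT + (global_addr % UNIT)
    return pool, local

def switch_read(global_addr, length, read_from_pool):
    """Split a read spanning interleaving units into per-pool sub-reads
    (the multiple-unicast case) and aggregate them into one response."""
    parts = []
    addr, remaining = global_addr, length
    while remaining > 0:
        pool, local = decode(addr)
        chunk = min(remaining, UNIT - (addr % UNIT))  # stay within one unit
        parts.append(read_from_pool(pool, local, chunk))
        addr += chunk
        remaining -= chunk
    return b"".join(parts)  # single aggregated response to the application

# Dummy backend returning zero bytes, just to exercise the fan-out:
data = switch_read(0, 10000, lambda pool, off, n: bytes(n))
print(len(data))
```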
-
Publication No.: US20220197819A1
Publication Date: 2022-06-23
Application No.: US17691743
Filing Date: 2022-03-10
Applicant: Intel Corporation
Inventor: Karthik KUMAR , Francesc GUIM BERNAT , Thomas WILLHALM , Marcos E. CARRANZA , Cesar Ignacio MARTINEZ SPESSOT
IPC: G06F12/109 , G06F12/14
Abstract: Examples described herein relate to a memory controller that allocates an address range for a process among multiple memory pools based on service level parameters associated with the address range and the performance capabilities of the multiple memory pools. In some examples, the service level parameters include one or more of latency, network bandwidth, amount of memory allocation, memory bandwidth, data encryption use, type of encryption to apply to stored data, use of data encryption to transport data to a requester, memory technology, and/or durability of a memory device.
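As an illustration of the allocation decision the abstract describes, here is a minimal sketch that matches an address range's service level parameters against per-pool capabilities; the pool records, field names, and first-fit policy are assumptions, not the patent's mechanism.

```python
# Sketch of SLA-driven pool selection; pool data and SLA keys are hypothetical.

POOLS = [
    {"name": "dram-local",  "latency_ns": 100,  "bw_gbps": 50, "encrypted": True},
    {"name": "cxl-pool-a",  "latency_ns": 400,  "bw_gbps": 30, "encrypted": True},
    {"name": "remote-pool", "latency_ns": 2000, "bw_gbps": 10, "encrypted": False},
]

def allocate(size_bytes, sla):
    """Return the first pool whose capabilities satisfy the service level
    parameters for this address range (latency, bandwidth, encryption)."""
    for pool in POOLS:
        if (pool["latency_ns"] <= sla.get("max_latency_ns", float("inf"))
                and pool["bw_gbps"] >= sla.get("min_bw_gbps", 0)
                and (not sla.get("require_encryption") or pool["encrypted"])):
            return {"pool": pool["name"], "size": size_bytes}
    raise RuntimeError("no pool satisfies the requested service level")

print(allocate(1 << 20, {"max_latency_ns": 500, "require_encryption": True}))
```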
-
Publication No.: US20220027278A1
Publication Date: 2022-01-27
Application No.: US17495454
Filing Date: 2021-10-06
Applicant: Intel Corporation
Inventor: Piotr WYSOCKI , Francesc GUIM BERNAT , John J. BROWNE , Pawel ZAK , Rafal SZTEJNA , Przemyslaw PERYCZ , Timothy VERRALL , Szymon KONEFAL
IPC: G06F12/0862 , G06F9/48
Abstract: Examples include techniques for core-specific metrics collection. Examples include fetching metrics of a core of a multi-core processor from one or more registers in response to scheduling of an event. The fetched metrics are pushed to a shared memory space of a memory that is accessible to a user-space application and to other cores of the multi-core processor. The user-space application accesses the shared memory space to aggregate core-specific metrics associated with at least the core of the multi-core processor and then publishes the aggregated core-specific metrics.
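The collection flow can be sketched as below: per-core counters are fetched on a scheduled event, pushed into a shared region, and aggregated by a user-space reader. The register reads are simulated and the record layout is an assumption.

```python
# Hedged sketch of per-core metrics collection into shared memory; the
# hardware counter read is simulated and the binary layout is assumed.

import mmap
import struct

CORES, FIELDS = 4, 2                 # two u64 metrics per core: (cycles, misses)
REC = struct.calcsize("QQ")          # one record = 16 bytes
shared = mmap.mmap(-1, CORES * REC)  # stands in for the shared memory space

def fetch_core_metrics(core):
    """Stand-in for reading the core's hardware counter registers."""
    return (1000 + core, 10 * core)

def push_metrics(core):
    """Event handler: fetch the core's counters and publish to shared memory."""
    shared.seek(core * REC)
    shared.write(struct.pack("QQ", *fetch_core_metrics(core)))

def aggregate():
    """User-space side: read every core's record and aggregate the totals."""
    totals = [0] * FIELDS
    for core in range(CORES):
        shared.seek(core * REC)
        vals = struct.unpack("QQ", shared.read(REC))
        totals = [t + v for t, v in zip(totals, vals)]
    return totals

for c in range(CORES):
    push_metrics(c)
print("aggregated (cycles, misses):", aggregate())
```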
-
Publication No.: US20210334138A1
Publication Date: 2021-10-28
Application No.: US17365898
Filing Date: 2021-07-01
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Susanne M. BALLE , Slawomir PUTYRSKI , Rahul KHANNA , Paul DORMITZER
Abstract: Technologies for pre-configuring accelerators by predicting bit-streams include communication circuitry and a compute device. The compute device includes a compute engine to determine one or more bit-streams registered on each accelerator of multiple accelerators. The compute engine is further to predict a next job to be requested for acceleration from an application of at least one compute sled of multiple compute sleds, predict a bit-stream from a bit-stream library that is to execute the predicted next job, and determine whether the predicted bit-stream is already registered on one of the accelerators. In response to a determination that the predicted bit-stream is not registered on one of the accelerators, the compute engine is to select an accelerator from the multiple accelerators that satisfies characteristics of the predicted bit-stream and register the predicted bit-stream on the selected accelerator.
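A toy version of the pre-configuration loop might look like the following; the predictor is reduced to a most-frequent-job lookup, and the bit-stream library, slot accounting, and one-to-one job-to-bit-stream mapping are all assumptions.

```python
# Illustrative sketch of predict-then-register; every structure is hypothetical.

from collections import Counter

BITSTREAM_LIBRARY = {"crypto": {"slots": 2}, "compress": {"slots": 1}}

class Accelerator:
    def __init__(self, name, slots):
        self.name, self.free_slots, self.registered = name, slots, set()

    def register(self, bitstream):
        self.free_slots -= BITSTREAM_LIBRARY[bitstream]["slots"]
        self.registered.add(bitstream)

def predict_next_job(history):
    """Predict the next requested job as the most frequent past job."""
    return Counter(history).most_common(1)[0][0]

def preconfigure(accelerators, history):
    job = predict_next_job(history)
    bitstream = job                      # assume a 1:1 job-to-bit-stream map
    if any(bitstream in a.registered for a in accelerators):
        return                           # already registered somewhere
    need = BITSTREAM_LIBRARY[bitstream]["slots"]
    target = next(a for a in accelerators if a.free_slots >= need)
    target.register(bitstream)           # pre-register before the request lands

accels = [Accelerator("fpga0", 2), Accelerator("fpga1", 1)]
preconfigure(accels, ["crypto", "compress", "crypto"])
print([(a.name, sorted(a.registered)) for a in accels])
```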
-
Publication No.: US20210288793A1
Publication Date: 2021-09-16
Application No.: US17332733
Filing Date: 2021-05-27
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Suraj PRABHAKARAN , Kshitij A. DOSHI , Timothy VERRALL
IPC: H04L9/08 , G06F3/06 , G06F9/50 , H04L29/06 , H04L29/08 , G06F16/25 , G06F16/2453 , H04L12/861 , G11C8/12 , G11C29/02 , H04L12/24 , G06F30/34 , G11C29/36 , G11C29/38 , G11C29/44 , G06F16/22 , G06F16/2455 , G06F12/02 , G06F12/14 , G06F13/16 , G06F15/173 , G06F13/40 , G06F13/42 , G06F9/448 , G06F9/28 , G06F15/16 , H04L12/703 , H04L12/743 , H04L12/801 , H04L12/803 , H04L12/935 , H04L12/931 , G06F9/4401 , G06F9/445 , G06F12/06 , G06F16/23 , G06F16/248 , G06F16/901 , G06F16/11
Abstract: Technologies for providing streamlined provisioning of accelerated functions in a disaggregated architecture include a compute sled. The compute sled includes a network interface controller and circuitry to determine whether to accelerate a function of a workload executed by the compute sled, and send, to a memory sled and in response to a determination to accelerate the function, a data set on which the function is to operate. The circuitry is also to receive, from the memory sled, a service identifier indicative of a memory location independent handle for data associated with the function, send, to a compute device, a request to schedule acceleration of the function on the data set, receive a notification of completion of the acceleration of the function, and obtain, in response to receipt of the notification and using the service identifier, a resultant data set from the memory sled. The resultant data set was produced by an accelerator device during acceleration of the function on the data set. Other embodiments are also described and claimed.
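The request/response flow in the abstract can be mimicked end to end with in-memory stand-ins, as sketched below; the classes and the UUID-based service identifier are illustrative, not the patent's implementation.

```python
# Toy end-to-end flow: store data on a "memory sled", get a location-
# independent service identifier, run the accelerated function, fetch the
# result by the same identifier. All classes here are stand-ins.

import uuid

class MemorySled:
    def __init__(self):
        self.store = {}
    def put(self, data):
        sid = str(uuid.uuid4())      # service identifier: a handle, not an address
        self.store[sid] = data
        return sid
    def get(self, sid):
        return self.store[sid]

class AcceleratorSled:
    def run(self, mem, sid, fn):
        """Execute the function on the data set, store the resultant data set
        under the same service identifier, and signal completion."""
        mem.store[sid] = fn(mem.get(sid))
        return True                  # completion notification

mem, acc = MemorySled(), AcceleratorSled()
sid = mem.put(b"payload")                         # send data set to memory sled
done = acc.run(mem, sid, lambda d: d.upper())     # request acceleration
assert done
print(mem.get(sid))                               # obtain resultant data set
```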
-
Publication No.: US20210258265A1
Publication Date: 2021-08-19
Application No.: US17169073
Filing Date: 2021-02-05
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR
IPC: H04L12/923 , H04L12/911 , H04L12/927 , G06F9/455 , G06F11/34
Abstract: Examples described herein relate to at least one processor that is to perform a command to build a container using multiple routines and allocate resources to at least one routine based on specification of a service level agreement (SLA) associated with each of the at least one routine. In some examples, the container is compatible with one or more of: Docker containers, Rkt containers, LXD containers, OpenVZ containers, Linux-VServer, Windows Containers, Hyper-V Containers, unikernels, or Java containers. In some examples, a service level is to specify one or more of: time to completion of a routine or resource allocation to the routine. In some examples, the resources include one or more of: cache allocation, memory allocation, memory bandwidth, network interface bandwidth, or accelerator allocation.
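One plausible reading of per-routine allocation is sketched below, where build routines with tighter deadlines receive proportionally more cores and memory; the SLA fields and the inverse-deadline weighting are assumptions, not the patent's policy.

```python
# Sketch of SLA-weighted resource allocation across container build routines.

ROUTINES = {
    "fetch-layers": {"sla": {"deadline_s": 5}},
    "compile":      {"sla": {"deadline_s": 30}},
    "package":      {"sla": {"deadline_s": 10}},
}

TOTAL_CORES, TOTAL_MEM_GB = 16, 64

def allocate(routines):
    """Weight each routine inversely by its deadline, so tighter deadlines
    get more resources. Rounding may slightly overcommit; a real allocator
    would normalize the shares."""
    weights = {n: 1.0 / r["sla"]["deadline_s"] for n, r in routines.items()}
    total = sum(weights.values())
    return {
        n: {"cores": max(1, round(TOTAL_CORES * w / total)),
            "mem_gb": round(TOTAL_MEM_GB * w / total, 1)}
        for n, w in weights.items()
    }

for name, res in allocate(ROUTINES).items():
    print(f"{name}: {res}")
```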
-
Publication No.: US20210255897A1
Publication Date: 2021-08-19
Application No.: US17246441
Filing Date: 2021-04-30
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Suraj PRABHAKARAN , Daniel RIVAS BARRAGAN , Kshitij A. DOSHI
Abstract: Technologies for opportunistic acceleration overprovisioning for disaggregated architectures that include multiple processors on one or more compute devices. The disaggregated architecture also includes a compute device that includes at least one accelerator device and acceleration management circuitry. The acceleration management circuitry receives a plurality of job execution requests. The acceleration management circuitry overprovisions one or more accelerators by scheduling two or more of the job execution requests for execution by each accelerator device.
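The overprovisioning step can be illustrated in a few lines; the round-robin queueing policy here is an assumption, the point being only that each accelerator's queue deliberately holds two or more pending jobs when jobs outnumber devices.

```python
# Sketch of accelerator overprovisioning: assign more jobs than devices.

from collections import deque

def overprovision(jobs, num_accels):
    """Assign every job execution request to an accelerator queue round-robin,
    so each accelerator holds multiple pending jobs when jobs outnumber devices."""
    queues = [deque() for _ in range(num_accels)]
    for i, job in enumerate(jobs):
        queues[i % num_accels].append(job)
    return queues

queues = overprovision([f"job{i}" for i in range(7)], num_accels=3)
for i, q in enumerate(queues):
    print(f"accelerator {i}: {list(q)}")   # each queue holds >= 2 jobs
```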
-
Publication No.: US20210120077A1
Publication Date: 2021-04-22
Application No.: US17134374
Filing Date: 2020-12-26
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR , Alexander BACHMUTSKY
Abstract: A multi-tenant dynamic secure data region, in which encryption keys can be shared by services running on different nodes, reduces the need to decrypt data as encrypted data is transferred between nodes in the data center. Instead of using a per-process/per-service key created by a memory controller when the service is instantiated (for example, with MKTME), a software stack can specify that a set of processes or compute entities (for example, bit-streams) share a private key that is created and provided by the data center.
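A hedged sketch of the shared-key idea follows: a registry issues one key per secure data region rather than per process, so two services on different nodes exchange ciphertext without an intermediate decrypt. It uses the third-party cryptography package's Fernet cipher purely as a software stand-in for the hardware keying the abstract mentions; the registry layout is hypothetical.

```python
# Sketch of one key per secure data region (pip install cryptography).

from cryptography.fernet import Fernet

class RegionKeyService:
    """Stand-in for the data-center facility that creates and hands out
    one private key per secure data region (not one key per process)."""
    def __init__(self):
        self._keys = {}
    def key_for(self, region):
        return self._keys.setdefault(region, Fernet.generate_key())

keysvc = RegionKeyService()

# Two services, on different nodes, joined to the same secure data region:
node_a = Fernet(keysvc.key_for("tenant42"))
node_b = Fernet(keysvc.key_for("tenant42"))

token = node_a.encrypt(b"row batch")   # node A encrypts once
print(node_b.decrypt(token))           # node B reads it directly, no re-keying
```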
-
Publication No.: US20210073161A1
Publication Date: 2021-03-11
Application No.: US17088513
Filing Date: 2020-11-03
Applicant: Intel Corporation
Inventor: Susanne M. BALLE , Evan CUSTODIO , Francesc GUIM BERNAT , Sujoy SEN , Slawomir PUTYRSKI , Paul DORMITZER , Joseph GRECCO
Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel and establish, in response to a determination to establish the logical communication path as a function of the obtained availability data, the logical communication path between the kernel of the present accelerator device and the other accelerator device kernel.
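The establish-or-not decision can be sketched as below, choosing the physical path with the fewest hops from the availability data; the data shapes and the hop-count metric are assumptions.

```python
# Sketch of establishing a logical communication path from availability data.

AVAILABILITY = {
    "kernel-fft": {"available": True,
                   "paths": [{"via": "pcie-switch", "hops": 1},
                             {"via": "fabric",      "hops": 3}]},
    "kernel-aes": {"available": False, "paths": []},
}

def establish_channel(target_kernel):
    """Return a logical channel descriptor if the target kernel is available,
    binding it to the physical path with the fewest hops; otherwise None."""
    info = AVAILABILITY.get(target_kernel)
    if not info or not info["available"]:
        return None                      # decision: do not establish
    path = min(info["paths"], key=lambda p: p["hops"])
    return {"local": "kernel-self", "remote": target_kernel, "path": path}

print(establish_channel("kernel-fft"))
print(establish_channel("kernel-aes"))
```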
-
Publication No.: US20200004685A1
Publication Date: 2020-01-02
Application No.: US16568048
Filing Date: 2019-09-11
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Slawomir PUTYRSKI , Susanne BALLE
IPC: G06F12/0862 , G06F12/0813 , G06F11/30
Abstract: Examples described herein relate to prefetching content from a remote memory device to a memory tier local to a higher-level cache or memory. An application or device can indicate a time by which data is to be available in a higher-level cache or memory. A prefetcher used by a network interface can allocate resources in any intermediary network device in a data path from the remote memory device to the memory tier local to the higher-level cache. Memory access bandwidth, egress bandwidth, and memory space in any intermediary network device can be allocated for prefetching content. In some examples, proactive prefetching can occur for content that is expected to be needed but has not been requested for prefetch.
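Deadline-driven prefetch sizing, as the abstract describes, reduces to simple arithmetic: from the content size and the time by which the data must be available, derive the bandwidth to reserve at every hop on the path. The device names and reservation fields below are hypothetical.

```python
# Sketch of deadline-driven prefetch planning along a data path.

def plan_prefetch(size_bytes, deadline_s, path):
    """Reserve enough egress bandwidth and buffer space at each hop so the
    content lands in the local memory tier before the deadline."""
    required_gbps = (size_bytes * 8) / (deadline_s * 1e9)
    reservations = []
    for device in path:
        reservations.append({
            "device": device,
            "egress_gbps": required_gbps,   # bandwidth along the data path
            "buffer_bytes": size_bytes,     # staging space at the hop
        })
    return reservations

path = ["remote-memory-nic", "tor-switch", "host-nic"]
for r in plan_prefetch(size_bytes=256 << 20, deadline_s=0.5, path=path):
    print(r)
```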