-
Publication No.: US20220222010A1
Publication Date: 2022-07-14
Application No.: US17710657
Filing Date: 2022-03-31
Applicant: Intel Corporation
Inventor: Alexander BACHMUTSKY , Francesc GUIM BERNAT , Karthik KUMAR , Marcos E. CARRANZA
IPC: G06F3/06
Abstract: Methods and apparatus for advanced interleaving techniques for fabric-based pooling architectures. The method is implemented in an environment including a switch connected to host servers and to pooled memory nodes or memory servers hosting memory pools. Memory is interleaved across the memory pools using interleaving units, with the interleaved memory mapped into a global memory address space. Applications running on the host servers are enabled to access data stored in the memory pools via memory read and write requests issued by the applications specifying address endpoints within the global memory space. The switch generates multi-cast or multiple unicast messages associated with the memory read and write requests that are sent to the pooled memory nodes or memory servers. For memory reads, the data returned from multiple memory pools is aggregated at the switch and returned to the application using one or more packets as a single response.
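A minimal sketch of the address decode the abstract describes: a global-memory-space address is split across pools by interleaving unit, and a request spanning multiple units fans out to multiple pools (hence the switch's multicast/multiple-unicast messages). The unit size, pool count, and function names here are illustrative assumptions, not from the patent.

```python
INTERLEAVE_UNIT = 4096          # bytes per interleaving unit (assumed)
NUM_POOLS = 4                   # pooled memory nodes behind the switch (assumed)

def decode(global_addr: int) -> tuple:
    """Map a global-memory-space address to (pool index, local offset)."""
    unit = global_addr // INTERLEAVE_UNIT
    pool = unit % NUM_POOLS                  # round-robin across pools
    local_unit = unit // NUM_POOLS           # unit index within that pool
    offset = local_unit * INTERLEAVE_UNIT + global_addr % INTERLEAVE_UNIT
    return pool, offset

def pools_for_range(addr: int, length: int) -> set:
    """Pools a read/write touches; more than one means multicast fan-out."""
    first = addr // INTERLEAVE_UNIT
    last = (addr + length - 1) // INTERLEAVE_UNIT
    return {u % NUM_POOLS for u in range(first, last + 1)}
```

With these assumed parameters, an 8 KiB read starting at address 0 touches pools 0 and 1, so the switch would issue two unicasts (or one multicast) and aggregate both replies into a single response.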
-
Publication No.: US20220197819A1
Publication Date: 2022-06-23
Application No.: US17691743
Filing Date: 2022-03-10
Applicant: Intel Corporation
Inventor: Karthik KUMAR , Francesc GUIM BERNAT , Thomas WILLHALM , Marcos E. CARRANZA , Cesar Ignacio MARTINEZ SPESSOT
IPC: G06F12/109 , G06F12/14
Abstract: Examples described herein relate to a memory controller to allocate an address range for a process among multiple memory pools based on service level parameters associated with the address range and performance capabilities of the multiple memory pools. In some examples, the service level parameters include one or more of latency, network bandwidth, amount of memory allocation, memory bandwidth, data encryption use, type of encryption to apply to stored data, use of data encryption to transport data to a requester, memory technology, and/or durability of a memory device.
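The matching logic the abstract implies can be sketched as a filter over pool capabilities. The field names and SLA keys below are assumptions chosen to mirror the listed service level parameters, not an interface from the patent.

```python
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    latency_ns: int         # access latency capability
    bandwidth_gbps: int     # memory bandwidth capability
    free_bytes: int         # remaining capacity
    encrypts_at_rest: bool  # data encryption support

def select_pool(pools, sla):
    """Return the first pool satisfying every SLA parameter, else None."""
    for p in pools:
        if (p.latency_ns <= sla["max_latency_ns"]
                and p.bandwidth_gbps >= sla["min_bandwidth_gbps"]
                and p.free_bytes >= sla["alloc_bytes"]
                and (not sla["require_encryption"] or p.encrypts_at_rest)):
            return p
    return None
```

A real controller would likely split one address range across several pools and weigh parameters rather than hard-filter, but the shape of the decision (SLA parameters vs. pool capabilities) is the same.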
-
Publication No.: US20210258265A1
Publication Date: 2021-08-19
Application No.: US17169073
Filing Date: 2021-02-05
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR
IPC: H04L12/923 , H04L12/911 , H04L12/927 , G06F9/455 , G06F11/34
Abstract: Examples described herein relate to at least one processor that is to perform a command to build a container using multiple routines and allocate resources to at least one routine based on specification of a service level agreement (SLA) associated with each of the at least one routine. In some examples, the container is compatible with one or more of: Docker containers, Rkt containers, LXD containers, OpenVZ containers, Linux-VServer, Windows Containers, Hyper-V Containers, unikernels, or Java containers. In some examples, a service level is to specify one or more of: time to completion of a routine or resource allocation to the routine. In some examples, the resources include one or more of: cache allocation, memory allocation, memory bandwidth, network interface bandwidth, or accelerator allocation.
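A toy sketch of per-routine resource allocation during a container build, splitting the resource kinds the abstract lists (cache, memory bandwidth, etc.) by an assumed SLA priority field. All names and the proportional-share policy are illustrative assumptions.

```python
def allocate(routines, total_cache_ways=12, total_mem_bw=100):
    """Split cache ways and memory bandwidth proportionally to SLA priority."""
    total_prio = sum(r["sla"]["priority"] for r in routines)
    plan = {}
    for r in routines:
        share = r["sla"]["priority"] / total_prio
        plan[r["name"]] = {
            "cache_ways": max(1, int(total_cache_ways * share)),
            "mem_bw_pct": int(total_mem_bw * share),
        }
    return plan
```

An SLA keyed on time-to-completion, as the abstract also mentions, would instead feed a closed loop that grows a routine's allocation when it falls behind its deadline.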
-
Publication No.: US20210120077A1
Publication Date: 2021-04-22
Application No.: US17134374
Filing Date: 2020-12-26
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR , Alexander BACHMUTSKY
Abstract: A multi-tenant dynamic secure data region in which encryption keys can be shared by services running in nodes reduces the need for decrypting data as encrypted data is transferred between nodes in the data center. Instead of using a per-process/per-service key created by a memory controller when the service is instantiated (for example, MKTME), a software stack can specify that a set of processes or compute entities (for example, bit-streams) share a private key that is created and provided by the data center.
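The provisioning model the abstract contrasts with per-process keys can be sketched as a data-center key service that hands one key to every member of a declared group. The class and method names are hypothetical; real key distribution would involve attestation and hardware key slots rather than returning raw bytes.

```python
import secrets

class KeyService:
    """Data-center service provisioning one shared key per service group."""

    def __init__(self):
        self._group_keys = {}

    def create_group(self, group_id, members):
        """Provision a single private key shared by a set of compute entities."""
        self._group_keys[group_id] = (frozenset(members),
                                      secrets.token_bytes(32))

    def key_for(self, group_id, member):
        """Hand the shared key only to registered group members."""
        members, key = self._group_keys[group_id]
        if member not in members:
            raise PermissionError(member)
        return key
```

Because every group member holds the same key, data encrypted by one service can move between nodes and be used by another member without an intermediate decrypt/re-encrypt step.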
-
Publication No.: US20190384837A1
Publication Date: 2019-12-19
Application No.: US16012515
Filing Date: 2018-06-19
Applicant: Intel Corporation
Inventor: Karthik KUMAR , Francesc GUIM BERNAT , Thomas WILLHALM , Mark A. SCHMISSEUR , Benjamin GRANIELLO
IPC: G06F17/30 , G06F11/14 , G06F12/0804 , G06F12/02
Abstract: A group of cache lines in cache may be identified as cache lines not to be flushed to persistent memory until all cache line writes for the group of cache lines have been completed.
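The gating behavior in the abstract can be sketched as a group tracker that withholds every write-back until the last line in the group has been written. Names are illustrative; in hardware this would sit in the cache controller, not software.

```python
class FlushGroup:
    """Hold back flushes to persistent memory until the whole group is written."""

    def __init__(self, addrs, flush_fn):
        self.pending = set(addrs)   # cache lines not yet written
        self.dirty = {}             # completed writes held back from flushing
        self.flush_fn = flush_fn    # writes one line to persistent memory

    def write(self, addr, data):
        self.dirty[addr] = data
        self.pending.discard(addr)
        if not self.pending:        # all writes done: flush the group together
            for a, d in sorted(self.dirty.items()):
                self.flush_fn(a, d)
            self.dirty.clear()
```

Deferring the flush this way keeps persistent memory from ever observing a partially written group, which is the usual motivation for grouping cache-line write-backs.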
-
Publication No.: US20180004687A1
Publication Date: 2018-01-04
Application No.: US15201373
Filing Date: 2016-07-01
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR , Thomas WILLHALM , Narayan RANGANATHAN , Pete D. VOGT
IPC: G06F13/16 , G06F13/42 , G06F13/40 , H04L29/08 , H04L12/803
Abstract: An extension of node architecture and proxy requests enables a node to expose memory computation capability to remote nodes. A remote node can request execution of an operation by a remote memory computation resource, and the remote memory computation resource can execute the request locally and return the results of the computation. The node includes processing resources, a fabric interface, and a memory subsystem including a memory computation resource. The local execution of the request by the memory computation resource can reduce latency and bandwidth concerns typical with remote requests.
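The proxy-request flow the abstract outlines can be sketched as a node executing an operation next to its own memory and returning only the result over the fabric. All class, method, and operation names here are assumptions for illustration.

```python
class MemoryNode:
    """Node exposing a memory-side compute capability to remote requesters."""

    def __init__(self, memory):
        self.memory = memory                 # local memory subsystem
        self.ops = {"sum": sum, "max": max}  # supported memory computations

    def execute(self, op, addr, count):
        """Run the operation next to the data; return only the result."""
        data = self.memory[addr:addr + count]
        return self.ops[op](data)

class FabricInterface:
    """Routes a remote node's proxy request to the owning memory node."""

    def __init__(self, nodes):
        self.nodes = nodes

    def proxy_request(self, node_id, op, addr, count):
        # Instead of moving `count` elements over the fabric and computing
        # at the requester, move a single result back: less latency and
        # bandwidth, as the abstract notes.
        return self.nodes[node_id].execute(op, addr, count)
```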
-