-
Publication No.: US20220222118A1
Publication Date: 2022-07-14
Application No.: US17710594
Application Date: 2022-03-31
Applicant: Intel Corporation
Inventor: Ren WANG , Christian MACIOCCO , Yipeng WANG , Kshitij A. DOSHI , Vesh Raj SHARMA BANJADE , Satish C. JHA , S M Iftekharul ALAM , Srikathyayani SRIKANTESWARA , Alexander BACHMUTSKY
IPC: G06F9/50 , G06F13/42 , G06F12/1045
Abstract: Methods, apparatus, and systems for adaptive collaborative memory with the assistance of programmable networking devices. In one example, the programmable networking device is a switch deployed in a system or cluster of servers comprising a plurality of nodes. The switch selects one or more nodes to be remote memory server nodes and allocates one or more portions of memory on those nodes to be used as remote memory for one or more remote memory client nodes. The switch receives memory access request messages originating from remote memory client nodes containing indicia identifying the memory to be accessed, determines which remote memory server node is to service a given memory access request, and sends a memory access request message containing indicia identifying the memory to be accessed to the determined remote memory server node. The switch also facilitates the return of messages containing remote memory access responses to the client nodes.
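The switch-side routing the abstract describes can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the class, the pick-the-server-with-most-free-pages policy, and all names are assumptions introduced here.

```python
# Hypothetical sketch of the switch's role: enroll server nodes, allocate
# remote memory regions on them, and route client requests to the server
# backing each region. Names and the allocation policy are illustrative.

class MemorySwitch:
    def __init__(self):
        self.region_to_server = {}    # region_id -> backing server node
        self.server_free_pages = {}   # server node -> pages it can lend

    def enroll_server(self, node, free_pages):
        """Select a node as a remote memory server and record its capacity."""
        self.server_free_pages[node] = free_pages

    def allocate_region(self, region_id, pages):
        """Back a new region with the server holding the most free pages
        (one possible policy; the patent does not fix a specific one)."""
        node = max(self.server_free_pages, key=self.server_free_pages.get)
        if self.server_free_pages[node] < pages:
            raise MemoryError("no server node has enough free memory")
        self.server_free_pages[node] -= pages
        self.region_to_server[region_id] = node
        return node

    def route_request(self, request):
        """Forward a client's access request to the server for its region."""
        server = self.region_to_server[request["region_id"]]
        return {"to": server, "op": request["op"], "addr": request["addr"]}
```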
-
Publication No.: US20220179805A1
Publication Date: 2022-06-09
Application No.: US17441668
Application Date: 2019-06-21
Applicant: Intel Corporation
Inventor: Jiayu HU , Ren WANG , Cunming LIANG
Abstract: Examples include a computing system having a direct memory access (DMA) engine pipeline, a plurality of processing cores, each processing core including a core pipeline, and a memory coupled to the DMA engine pipeline and the plurality of processing cores. The computing system includes a pipeline selector coupled to the plurality of processing cores and the DMA engine pipeline, the pipeline selector to, during initialization, determine at least one threshold for pipeline selection for the computing system, and during runtime, select one of the core pipelines or the DMA engine pipeline to execute a memory copy operation in the memory based at least in part on the at least one threshold.
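The selection logic can be illustrated with a simple size-threshold model: a DMA engine pays a fixed setup cost but copies bytes faster than the core pipeline, so there is a break-even copy size. The cost figures below are invented for illustration; the patent only requires that at least one threshold be determined at initialization and applied at runtime.

```python
# Illustrative threshold-based pipeline selection for memory copies.
# The calibration constants are made-up assumptions, not measured values.

def calibrate_threshold(dma_setup_cost_ns=800.0, core_ns_per_byte=0.05,
                        dma_ns_per_byte=0.01):
    """At initialization: find the copy size where the DMA engine's fixed
    setup cost is amortized and it overtakes the core pipeline.
    Solve size*core = setup + size*dma for size."""
    return dma_setup_cost_ns / (core_ns_per_byte - dma_ns_per_byte)

def select_pipeline(copy_size, threshold):
    """At runtime: route a memory copy to a core pipeline (small copies)
    or the DMA engine pipeline (large copies) based on the threshold."""
    return "dma" if copy_size >= threshold else "core"
```

Under these assumed costs the break-even point is 20,000 bytes: a 4 KiB copy stays on the core, while a 1 MiB copy is handed to the DMA engine.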
-
Publication No.: US20210089216A1
Publication Date: 2021-03-25
Application No.: US17099653
Application Date: 2020-11-16
Applicant: Intel Corporation
Inventor: Yipeng WANG , Ren WANG , Sameh GOBRIEL , Tsung-Yuan C. TAI
IPC: G06F3/06 , G06F12/128 , H04L12/747 , G06F12/0875
Abstract: Examples may include techniques to control an insertion ratio or rate for a cache. Examples include comparing cache miss ratios for different time intervals or windows for a cache to determine whether to adjust a cache insertion ratio that is based on a ratio of cache misses to cache insertions.
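The feedback loop in the abstract can be sketched as comparing the miss ratio of the current time window against the previous one and nudging the insertion ratio accordingly. The direction of adjustment and the step size below are assumptions for illustration; the abstract specifies only that miss ratios across windows drive the adjustment.

```python
# Sketch of a miss-ratio feedback loop over two time windows. The policy
# direction (insert more when misses rise) and step size are assumed.

def adjust_insertion_ratio(prev_misses, prev_accesses,
                           cur_misses, cur_accesses,
                           insertion_ratio, step=0.05):
    """Compare the cache miss ratio of the current window against the
    previous window and adjust the insertion ratio, clamped to [0, 1]."""
    prev_miss_ratio = prev_misses / max(prev_accesses, 1)
    cur_miss_ratio = cur_misses / max(cur_accesses, 1)
    if cur_miss_ratio > prev_miss_ratio:
        insertion_ratio = min(1.0, insertion_ratio + step)  # misses rising
    elif cur_miss_ratio < prev_miss_ratio:
        insertion_ratio = max(0.0, insertion_ratio - step)  # misses falling
    return insertion_ratio
```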
-
Publication No.: US20190102346A1
Publication Date: 2019-04-04
Application No.: US16207065
Application Date: 2018-11-30
Applicant: Intel Corporation
Inventor: Ren WANG , Andrew J. HERDRICH , Tsung-Yuan C. TAI , Yipeng WANG , Raghu KONDAPALLI , Alexander BACHMUTSKY , Yifan YUAN
IPC: G06F16/901 , G06F16/903
Abstract: A central processing unit can offload table lookup or tree traversal to an offload engine. The offload engine can provide hardware-accelerated operations such as instruction queueing, bit masking, hashing functions, data comparisons, a results queue, and progress tracking. The offload engine can be associated with a last-level cache. In the case of a hash table lookup, the offload engine can apply a hashing function to a key to generate a signature, apply a comparator to compare stored signatures against the generated signature, retrieve the key associated with a matching signature, and apply the comparator to compare the lookup key against the retrieved key. Accordingly, a data pointer associated with the key can be provided in the results queue. Acceleration of operations in tree traversal and tuple search can also occur.
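The two-stage hash lookup the abstract describes (signature compare first, full key compare only on a signature hit) can be modeled in software as below. The data layout, signature width, and names are illustrative assumptions; the real engine performs these steps in hardware near the last-level cache.

```python
# Sketch of a signature-then-key hash lookup, emulating in software the
# two-stage compare the offload engine performs. Layout and names assumed.

def signature(key, bits=16):
    """Derive a short signature from the key's hash; stands in for the
    engine's hashing function plus bit masking."""
    return hash(key) & ((1 << bits) - 1)

class OffloadHashTable:
    def __init__(self):
        self.slots = []   # each slot: (signature, key, data_pointer)

    def insert(self, key, data_pointer):
        self.slots.append((signature(key), key, data_pointer))

    def lookup(self, key):
        """Stage 1: compare the generated signature against stored ones.
        Stage 2: on a signature match, compare the full keys. A match's
        data pointer is what would land in the results queue."""
        sig = signature(key)
        for slot_sig, slot_key, ptr in self.slots:
            if slot_sig == sig and slot_key == key:
                return ptr
        return None
```

The cheap signature compare filters out almost all non-matching slots before the (wider, slower) full-key compare, which is the point of generating signatures at all.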