ADVANCED INTERLEAVING TECHNIQUES FOR FABRIC BASED POOLING ARCHITECTURES

    Publication Number: US20220222010A1

    Publication Date: 2022-07-14

    Application Number: US17710657

    Application Date: 2022-03-31

    Abstract: Methods and apparatus for advanced interleaving techniques for fabric-based pooling architectures. The method is implemented in an environment in which a switch is connected to host servers and to pooled memory nodes or memory servers hosting memory pools. Memory is interleaved across the memory pools using interleaving units, with the interleaved memory mapped into a global memory address space. Applications running on the host servers are enabled to access data stored in the memory pools via memory read and write requests that specify address endpoints within the global memory address space. The switch generates multicast or multiple unicast messages associated with the memory read and write requests, which are sent to the pooled memory nodes or memory servers. For memory reads, the data returned from multiple memory pools is aggregated at the switch and returned to the application in one or more packets as a single response.
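The interleaving scheme the abstract describes can be illustrated with a minimal sketch. The constants (`INTERLEAVE_UNIT`, `NUM_POOLS`) and function names below are hypothetical, chosen only to show how a global address decomposes into a pool index and a local offset, and how a request spanning several interleave units would fan out into per-pool messages at the switch:

```python
# Hypothetical parameters: a 4 KiB interleave unit striped round-robin
# across 4 memory pools. Real deployments would configure these values.
INTERLEAVE_UNIT = 4096
NUM_POOLS = 4

def map_global_address(addr):
    """Decompose a global-address-space address into (pool index, local offset)."""
    unit = addr // INTERLEAVE_UNIT          # which interleave unit
    pool = unit % NUM_POOLS                 # round-robin pool selection
    local_unit = unit // NUM_POOLS          # unit index within that pool
    offset = local_unit * INTERLEAVE_UNIT + addr % INTERLEAVE_UNIT
    return pool, offset

def split_read(addr, length):
    """Split a read spanning multiple interleave units into per-pool
    (offset, length) requests, as the switch would before issuing its
    multicast or multiple unicast messages."""
    requests = {}
    while length > 0:
        pool, offset = map_global_address(addr)
        # A chunk never crosses an interleave-unit boundary.
        chunk = min(length, INTERLEAVE_UNIT - addr % INTERLEAVE_UNIT)
        requests.setdefault(pool, []).append((offset, chunk))
        addr += chunk
        length -= chunk
    return requests
```

Under this sketch, an 8 KiB read starting at global address 0 fans out to pools 0 and 1; the switch would aggregate the two returned chunks into a single response.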

    DYNAMIC LOAD BALANCING FOR POOLED MEMORY

    Publication Number: US20220197819A1

    Publication Date: 2022-06-23

    Application Number: US17691743

    Application Date: 2022-03-10

    Abstract: Examples described herein relate to a memory controller that allocates an address range for a process among multiple memory pools based on service level parameters associated with the address range and the performance capabilities of the multiple memory pools. In some examples, the service level parameters include one or more of latency, network bandwidth, amount of memory allocation, memory bandwidth, data encryption use, type of encryption to apply to stored data, use of data encryption to transport data to a requester, memory technology, and/or durability of a memory device.
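The matching of service level parameters against pool capabilities can be sketched as a simple filter-and-rank step. The class, field names, and SLA keys below are illustrative assumptions, not the patent's actual data structures; the sketch only shows the shape of the decision the memory controller makes:

```python
from dataclasses import dataclass

@dataclass
class PoolCaps:
    """Hypothetical capability record for one memory pool."""
    name: str
    latency_ns: int          # access latency
    bandwidth_gbps: float    # sustained memory bandwidth
    free_bytes: int          # capacity available for allocation
    encrypted: bool          # supports at-rest data encryption

def pick_pool(pools, sla):
    """Return the pools that satisfy every service-level parameter,
    ordered best-latency-first, as a controller might before placing
    an address range."""
    eligible = [p for p in pools
                if p.latency_ns <= sla["max_latency_ns"]
                and p.bandwidth_gbps >= sla["min_bandwidth_gbps"]
                and p.free_bytes >= sla["alloc_bytes"]
                and (p.encrypted or not sla["require_encryption"])]
    return sorted(eligible, key=lambda p: p.latency_ns)
```

For example, with an SLA requiring encryption, an unencrypted local-DRAM pool would be filtered out even if it offered the lowest latency, and the range would land on an encrypted pool instead.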

    RESOURCE MANAGEMENT FOR COMPONENTS OF A VIRTUALIZED EXECUTION ENVIRONMENT

    Publication Number: US20210258265A1

    Publication Date: 2021-08-19

    Application Number: US17169073

    Application Date: 2021-02-05

    Abstract: Examples described herein relate to at least one processor that is to perform a command to build a container using multiple routines and allocate resources to at least one routine based on specification of a service level agreement (SLA) associated with each of the at least one routine. In some examples, the container is compatible with one or more of: Docker containers, Rkt containers, LXD containers, OpenVZ containers, Linux-VServer, Windows Containers, Hyper-V Containers, unikernels, or Java containers. In some examples, a service level is to specify one or more of: time to completion of a routine or resource allocation to the routine. In some examples, the resources include one or more of: cache allocation, memory allocation, memory bandwidth, network interface bandwidth, or accelerator allocation.
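The per-routine resource allocation described above can be sketched as a lookup from an SLA tier to a resource grant. The tier names and resource values below are invented for illustration; the point is only that each routine in a container build receives cache, memory, and CPU allocations according to its own service level:

```python
# Hypothetical SLA tiers mapping to resource grants for container-build
# routines (e.g. cache ways as in cache-allocation technology, memory, cores).
TIER_RESOURCES = {
    "gold":   {"cpu_cores": 8, "mem_mb": 8192, "cache_ways": 8},
    "silver": {"cpu_cores": 4, "mem_mb": 4096, "cache_ways": 4},
    "bronze": {"cpu_cores": 2, "mem_mb": 2048, "cache_ways": 2},
}

def allocate(routines):
    """routines: list of (routine_name, sla_tier) pairs making up one
    container build. Returns the per-routine resource grant."""
    return {name: TIER_RESOURCES[tier] for name, tier in routines}
```

A build might then give its compile routine a "gold" grant to meet a time-to-completion target while its packaging routine runs at "bronze".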

    REMOTE MEMORY OPERATIONS

    Publication Number: US20180004687A1

    Publication Date: 2018-01-04

    Application Number: US15201373

    Application Date: 2016-07-01

    Abstract: An extension of the node architecture and proxy requests enables a node to expose memory computation capability to remote nodes. A remote node can request execution of an operation by a remote memory computation resource, and that resource can execute the request locally and return the results of the computation. The node includes processing resources, a fabric interface, and a memory subsystem including a memory computation resource. Local execution of the request by the memory computation resource can reduce the latency and bandwidth concerns typical of remote requests.
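The proxy-request flow can be sketched in a few lines. The class and operation set below are hypothetical stand-ins for the node's memory computation resource; the sketch shows the key point that the operation runs where the data lives, so only the small result (not the operand data) crosses the fabric:

```python
# Hypothetical sketch of a proxied remote memory operation: the requester
# sends an opcode and a list of local memory addresses; the target node
# executes the operation near its memory and returns only the result.
class MemoryComputeNode:
    def __init__(self, memory):
        self.memory = memory                      # local memory: addr -> value
        self.ops = {"sum": sum, "max": max, "min": min}

    def handle_proxy_request(self, op, addrs):
        values = [self.memory[a] for a in addrs]  # reads stay node-local
        return self.ops[op](values)               # only the result is returned

node = MemoryComputeNode({0: 3, 8: 5, 16: 7})
result = node.handle_proxy_request("sum", [0, 8, 16])  # -> 15
```

A remote summation over three values returns one scalar across the fabric instead of three memory reads, which is the latency and bandwidth saving the abstract refers to.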
