METHOD TO MINIMIZE HOT/COLD PAGE DETECTION OVERHEAD ON RUNNING WORKLOADS

    Publication No.: US20230092541A1

    Publication Date: 2023-03-23

    Application No.: US17483195

    Application Date: 2021-09-23

    Abstract: Methods and apparatus to minimize hot/cold page detection overhead on running workloads. A page meta data structure is populated with meta data associated with memory pages in one or more far memory tiers. In conjunction with one or more processes accessing memory pages to perform workloads, the page meta data structure is updated to reflect accesses to the memory pages. The page meta data, which reflects the current state of memory, is used to determine which pages are "hot" pages and which pages are "cold" pages, wherein hot pages are memory pages with relatively higher access frequencies and cold pages are memory pages with relatively lower access frequencies. Variations on the approach include filtering meta data updates to pages in memory regions of interest and applying one or more filters to trigger meta data updates based on one or more conditions. A callback function may also be triggered to execute synchronously with memory page accesses.
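
    A minimal C sketch of the kind of page meta data table described above, with per-page access counters, a hot/cold threshold, and filtering of updates to pages in a region of interest. The layout, the HOT_THRESHOLD constant, and the sampling scheme are illustrative assumptions rather than details taken from the patent.

    /* Hot/cold page classification over per-page access counters.
     * Names and constants here are assumptions for illustration only. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_COUNT    8
    #define HOT_THRESHOLD 4        /* accesses per sampling window */

    struct page_meta {
        uint64_t addr;                    /* base address of the page        */
        uint32_t access_count;            /* accesses seen in current window */
        bool     in_region_of_interest;   /* filter: only track these pages  */
    };

    static struct page_meta table[PAGE_COUNT];

    /* Called on (sampled) page accesses; updates meta data only for pages in
     * a region of interest, mirroring the filtering variation above.  A
     * callback hook could be invoked here to run synchronously with the
     * access, as the abstract mentions. */
    static void record_access(int page)
    {
        if (table[page].in_region_of_interest)
            table[page].access_count++;
    }

    static void classify(void)
    {
        for (int i = 0; i < PAGE_COUNT; i++) {
            if (!table[i].in_region_of_interest)
                continue;
            printf("page %d (0x%llx): %s\n", i,
                   (unsigned long long)table[i].addr,
                   table[i].access_count >= HOT_THRESHOLD ? "hot" : "cold");
            table[i].access_count = 0;    /* start a new sampling window */
        }
    }

    int main(void)
    {
        for (int i = 0; i < PAGE_COUNT; i++) {
            table[i].addr = 0x100000ULL + (uint64_t)i * 4096;
            table[i].in_region_of_interest = (i % 2 == 0);
        }
        for (int i = 0; i < 20; i++)
            record_access(i % PAGE_COUNT);
        /* extra traffic to pages 0 and 4 so they classify as hot */
        for (int i = 0; i < 6; i++) { record_access(0); record_access(4); }
        classify();
        return 0;
    }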

    MEMORY THIN PROVISIONING USING MEMORY POOLS

    Publication No.: US20210200667A1

    Publication Date: 2021-07-01

    Application No.: US16727595

    Application Date: 2019-12-26

    Abstract: Examples described herein relate to memory thin provisioning in a memory pool of one or more dual in-line memory modules or memory devices. At any instant, any central processing unit (CPU) can request and receive a full virtual allocation of memory in an amount that exceeds the physical memory attached to the CPU (near memory). A remote pool of additional memory can be dynamically utilized to fill the gap between allocated memory and near memory. This remote pool is shared between multiple CPUs, with dynamic assignment and address re-mapping provided for the remote pool. To improve performance, the near memory can be operated as a cache of the pool memory. Inclusive or exclusive content storage configurations can be applied: in an inclusive cache configuration, an entry in the near memory cache is also stored in the memory pool, whereas an exclusive cache configuration keeps an entry in either the near memory cache or the memory pool but not both. Near memory cache management includes current data location tracking, access counting and other caching heuristics, eviction of data from the near memory cache to pool memory, and movement of data from pool memory to the near memory cache.
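
    A short C sketch of the inclusive/exclusive behaviour described above, using a small direct-mapped "near memory" in front of a larger pool. The direct-mapped layout, sizes, and function names are assumptions made for illustration, not the patent's implementation.

    /* Near memory operated as a cache of a shared memory pool.  EXCLUSIVE
     * mode keeps a single copy; INCLUSIVE leaves the pool copy valid. */
    #include <stdio.h>
    #include <stdbool.h>

    #define POOL_LINES 64
    #define NEAR_LINES 8            /* near memory is much smaller than the pool */

    enum mode { INCLUSIVE, EXCLUSIVE };

    static int  pool[POOL_LINES];
    static bool pool_valid[POOL_LINES];
    static int  near_data[NEAR_LINES];
    static int  near_tag[NEAR_LINES];
    static bool near_valid[NEAR_LINES];

    static int read_line(int addr, enum mode m)
    {
        int slot = addr % NEAR_LINES;
        if (near_valid[slot] && near_tag[slot] == addr)
            return near_data[slot];             /* near-memory hit */

        /* miss: evict the current occupant back to the pool, then fill */
        if (near_valid[slot]) {
            pool[near_tag[slot]] = near_data[slot];
            pool_valid[near_tag[slot]] = true;
        }
        near_data[slot]  = pool[addr];
        near_tag[slot]   = addr;
        near_valid[slot] = true;
        if (m == EXCLUSIVE)
            pool_valid[addr] = false;           /* only one copy exists */
        return near_data[slot];
    }

    int main(void)
    {
        for (int i = 0; i < POOL_LINES; i++) { pool[i] = i * 10; pool_valid[i] = true; }
        printf("read 3  -> %d\n", read_line(3, EXCLUSIVE));
        printf("pool copy of 3 valid? %s\n", pool_valid[3] ? "yes" : "no");
        printf("read 11 -> %d (same slot, evicts 3 back to the pool)\n", read_line(11, EXCLUSIVE));
        printf("pool copy of 3 valid? %s\n", pool_valid[3] ? "yes" : "no");
        return 0;
    }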

    SHARED MEMORY

    Publication No.: US20210081312A1

    Publication Date: 2021-03-18

    Application No.: US17103711

    Application Date: 2020-11-24

    Abstract: Examples described herein include a network interface controller comprising a memory interface and a network interface, the network interface controller being configurable to provide access to local memory and remote memory to a requester, wherein the network interface controller is configured with an amount of memory of different memory access speeds for allocation to one or more requesters. In some examples, the network interface controller is to grant or deny a memory allocation request from a requester based on a configuration of an amount of memory for different memory access speeds for allocation to the requester. In some examples, the network interface controller is to grant or deny a memory access request from a requester based on a configuration of memory allocated to the requester. In some examples, the network interface controller is to regulate quality of service of memory access requests from requesters.
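
    The grant/deny decision can be sketched in C as a per-requester quota per memory-speed class, as below. The tier names, limits, and structure layout are illustrative assumptions.

    /* NIC-side allocation control: grant a request only if it fits within
     * the amount configured for that requester and access-speed class. */
    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    enum tier { TIER_FAST, TIER_SLOW, TIER_MAX };

    struct requester_cfg {
        uint64_t limit[TIER_MAX];       /* configured ceiling per tier       */
        uint64_t allocated[TIER_MAX];   /* currently granted amount per tier */
    };

    static bool request_alloc(struct requester_cfg *r, enum tier t, uint64_t bytes)
    {
        if (r->allocated[t] + bytes > r->limit[t])
            return false;               /* deny: would exceed the configuration */
        r->allocated[t] += bytes;
        return true;                    /* grant */
    }

    int main(void)
    {
        struct requester_cfg vm0 = {
            .limit = { [TIER_FAST] = 1 << 20, [TIER_SLOW] = 8 << 20 },
        };
        printf("512 KiB fast: %s\n", request_alloc(&vm0, TIER_FAST, 512 * 1024) ? "granted" : "denied");
        printf("1 MiB fast:   %s\n", request_alloc(&vm0, TIER_FAST, 1 << 20)    ? "granted" : "denied");
        printf("4 MiB slow:   %s\n", request_alloc(&vm0, TIER_SLOW, 4 << 20)    ? "granted" : "denied");
        return 0;
    }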

    MEMORY AND STORAGE POOL INTERFACES

    Publication No.: US20210019069A1

    Publication Date: 2021-01-21

    Application No.: US17031721

    Application Date: 2020-09-24

    Abstract: Examples herein relate to a system capable of coupling to a remote memory pool, the system comprising: a memory controller and an interface to a connection, the interface coupled to the memory controller. In some examples, the interface is to translate a format of a memory access request to a format accepted by the memory controller and the memory controller is to provide the translated memory access request in a format accepted by a media. In some examples, a controller is to measure a number of addressable regions that are least accessed and cause at least one of the least accessed regions to be evicted to a local or remote memory device with relatively higher latency. In some examples, a remote access manager is to: determine if a region of addressable memory associated with a memory address for an access request is stored in the memory; based on the region of addressable memory associated with the memory address being stored in the memory, determine if a sub-region of addressable memory associated with the memory address is available for access from the memory, wherein the sub-region comprises less than an entirety of the region; and based on the sub-region of addressable memory being available for access from the memory, provide a physical address for use to access data from the sub-region in the memory and copy the data to the cache.
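
    A C sketch of the region/sub-region bookkeeping described above: a table records which addressable regions are held locally, which sub-regions are available, and how often each region is accessed, so the least-accessed region can be picked for eviction to a higher-latency device. Sizes, field names, and the address arithmetic are assumptions for illustration.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define REGIONS      4
    #define SUBS_PER_REG 4

    struct region {
        bool     cached;                    /* region present in local memory?  */
        bool     sub_ready[SUBS_PER_REG];   /* sub-regions available for access */
        uint32_t accesses;                  /* access-frequency counter         */
    };

    static struct region tbl[REGIONS];

    /* Returns true and a stand-in physical address when the sub-region can be
     * served locally; false means the request must go to remote memory. */
    static bool lookup(int region, int sub, uint64_t *phys)
    {
        if (!tbl[region].cached || !tbl[region].sub_ready[sub])
            return false;
        tbl[region].accesses++;
        *phys = (uint64_t)region * 0x10000 + (uint64_t)sub * 0x1000;
        return true;
    }

    /* Pick the cached region with the lowest access count for eviction. */
    static int pick_eviction_victim(void)
    {
        int victim = -1;
        for (int i = 0; i < REGIONS; i++)
            if (tbl[i].cached && (victim < 0 || tbl[i].accesses < tbl[victim].accesses))
                victim = i;
        return victim;
    }

    int main(void)
    {
        tbl[0] = (struct region){ .cached = true, .sub_ready = { true, true, false, false } };
        tbl[1] = (struct region){ .cached = true, .sub_ready = { true, true, true, true } };

        uint64_t phys;
        for (int i = 0; i < 5; i++) lookup(1, 0, &phys);
        printf("region 0, sub 1 local? %s\n", lookup(0, 1, &phys) ? "yes" : "no");
        printf("region 0, sub 3 local? %s\n", lookup(0, 3, &phys) ? "yes" : "no");
        printf("evict region %d (least accessed)\n", pick_eviction_victim());
        return 0;
    }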

    APPARATUS AND METHOD FOR INTELLIGENT MEMORY PAGE MANAGEMENT

    Publication No.: US20220283951A1

    Publication Date: 2022-09-08

    Application No.: US17751557

    Application Date: 2022-05-23

    Abstract: A method is described. The method includes determining that a memory page is in one of an active state and an idle state from meta data that is maintained for the memory page. The method includes recording a past history of active/idle state determinations that were previously made for the memory page. The method includes training a neural network on the past history of the memory page. The method includes using the neural network to predict one of a future active state and future idle state for the memory page. The method includes determining a location for the memory page based on the past history of the memory page and the predicted future state of the memory page, the location being one of a faster memory and a slower memory. The method includes moving the memory page to the location from the other one of the faster memory and the slower memory.
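
    The abstract combines a per-page active/idle history with a neural network that predicts the next state and drives placement. As a rough stand-in for that network, the C sketch below trains a single sigmoid unit on sliding windows of a synthetic history; the window size, learning rate, and history pattern are all assumptions, and a real implementation would use a fuller model.

    #include <stdio.h>
    #include <math.h>

    #define HISTORY 32
    #define WINDOW  4
    #define EPOCHS  500
    #define LR      0.5

    static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

    int main(void)
    {
        /* 1 = active, 0 = idle; a page whose state alternates in bursts */
        int hist[HISTORY];
        for (int i = 0; i < HISTORY; i++)
            hist[i] = ((i / 4) % 2 == 0) ? 1 : 0;

        double w[WINDOW] = { 0 }, b = 0;

        /* train: predict hist[t] from the WINDOW samples before it */
        for (int e = 0; e < EPOCHS; e++) {
            for (int t = WINDOW; t < HISTORY; t++) {
                double z = b;
                for (int k = 0; k < WINDOW; k++)
                    z += w[k] * hist[t - WINDOW + k];
                double p   = sigmoid(z);
                double err = p - hist[t];
                for (int k = 0; k < WINDOW; k++)
                    w[k] -= LR * err * hist[t - WINDOW + k];
                b -= LR * err;
            }
        }

        /* predict the next state from the most recent window and place the page */
        double z = b;
        for (int k = 0; k < WINDOW; k++)
            z += w[k] * hist[HISTORY - WINDOW + k];
        double p = sigmoid(z);
        printf("P(active next) = %.2f -> place page in %s memory\n",
               p, p > 0.5 ? "faster" : "slower");
        return 0;
    }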

    DIRECT MEMORY ACCESS (DMA) ENGINE WITH NETWORK INTERFACE CAPABILITIES

    Publication No.: US20210105207A1

    Publication Date: 2021-04-08

    Application No.: US17103781

    Application Date: 2020-11-24

    Abstract: Examples described herein include one or more processors; a network interface; and a direct memory access (DMA) engine communicatively coupled to the one or more processors. In some examples, the DMA engine is to receive a DMA data access request and based on an address in the DMA data access request corresponding to a remote memory device, the DMA engine is to cause the network interface to generate at least one packet for transmission to the remote memory device. In some examples, the DMA data access request includes a source address, a destination address, and a length. In some examples, if the source address corresponds to a local memory device and the destination address corresponds to a remote memory device, the DMA engine is to cause the network interface to generate at least one packet for transmission to the remote memory device, wherein the at least one packet includes data stored at the source address.
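
    A C sketch of the routing decision in the abstract: when a descriptor's destination falls in a remote address range, the DMA engine hands the payload to the network interface instead of copying locally. The address map, descriptor layout, and nic_send stand-in are assumptions for illustration.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define LOCAL_SIZE  4096
    #define REMOTE_BASE 0x80000000ULL     /* addresses at or above this are remote */

    static uint8_t local_mem[LOCAL_SIZE];

    struct dma_desc { uint64_t src, dst, len; };

    static bool is_remote(uint64_t addr) { return addr >= REMOTE_BASE; }

    /* Stand-in for handing a payload to the network interface as a packet. */
    static void nic_send(uint64_t dst, const uint8_t *data, uint64_t len)
    {
        printf("NIC: packet to remote 0x%llx, %llu bytes, first byte %u\n",
               (unsigned long long)dst, (unsigned long long)len, (unsigned)data[0]);
    }

    static void dma_submit(const struct dma_desc *d)
    {
        if (!is_remote(d->src) && !is_remote(d->dst)) {
            memcpy(&local_mem[d->dst], &local_mem[d->src], d->len);
            printf("DMA: local copy of %llu bytes\n", (unsigned long long)d->len);
        } else if (!is_remote(d->src) && is_remote(d->dst)) {
            /* local source, remote destination: data stored at src goes out in a packet */
            nic_send(d->dst, &local_mem[d->src], d->len);
        } else {
            printf("DMA: remote source handled via a network read (not shown)\n");
        }
    }

    int main(void)
    {
        local_mem[0] = 42;
        dma_submit(&(struct dma_desc){ .src = 0, .dst = 128,                  .len = 16 });
        dma_submit(&(struct dma_desc){ .src = 0, .dst = REMOTE_BASE + 0x1000, .len = 16 });
        return 0;
    }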

    PACKET MULTI-CAST FOR MEMORY POOL REPLICATION

    Publication No.: US20210075633A1

    Publication Date: 2021-03-11

    Application No.: US17103674

    Application Date: 2020-11-24

    Abstract: Examples described herein relate to a network interface. In some examples, the network interface is to access data designated for transmission in at least one packet to multiple memory nodes by inclusion of a multicast identifier of a memory node group and transmit the at least one packet to a destination network device, wherein the multicast identifier of the memory node group in the at least one packet is to cause an intermediate network device to multicast the packet to multiple memory nodes. In some examples, a memory node comprises a memory pool that includes one or more of: volatile memory, non-volatile memory, or persistent memory. In some examples, the intermediate network device comprises a switch configured to determine network addresses of memory nodes associated with the multicast identifier of the memory node group.
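
    The switch-side behaviour can be sketched in C as a lookup from the multicast identifier carried in the packet to the set of memory nodes that should each receive a replica. The group table and node addresses below are invented for illustration.

    #include <stdio.h>
    #include <stdint.h>

    #define MAX_MEMBERS 4

    struct mcast_group {
        uint16_t id;                        /* multicast identifier in the packet */
        int      member_count;
        uint32_t members[MAX_MEMBERS];      /* network addresses of memory nodes  */
    };

    static const struct mcast_group groups[] = {
        { .id = 7, .member_count = 3,
          .members = { 0x0A000001, 0x0A000002, 0x0A000003 } },
    };

    /* Replicate the payload to every memory node registered for the group. */
    static void switch_forward(uint16_t mcast_id, const char *payload)
    {
        for (size_t g = 0; g < sizeof groups / sizeof groups[0]; g++) {
            if (groups[g].id != mcast_id)
                continue;
            for (int m = 0; m < groups[g].member_count; m++)
                printf("forward '%s' to memory node 0x%08X\n",
                       payload, (unsigned)groups[g].members[m]);
            return;
        }
        printf("unknown multicast id %u, packet dropped\n", (unsigned)mcast_id);
    }

    int main(void)
    {
        switch_forward(7, "write: 64B cache line");    /* replicated to three nodes */
        switch_forward(9, "write: 64B cache line");    /* no such group             */
        return 0;
    }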

    PAGE-BASED REMOTE MEMORY ACCESS USING SYSTEM MEMORY INTERFACE NETWORK DEVICE

    Publication No.: US20210073151A1

    Publication Date: 2021-03-11

    Application No.: US17103602

    Application Date: 2020-11-24

    Abstract: Examples described herein include at least one processor and a direct memory access (DMA) device. In some examples, the DMA device is to: access a command from a memory region allocated to receive commands for execution by the DMA device, wherein the command is to access content from a local memory device or remote memory node. In some examples, the DMA device is to: determine if the content is stored in a local memory device or a remote memory node based on a configuration that indicates whether a source address refers to a memory address associated with the local memory device or the remote memory node and whether a destination address refers to a memory address associated with the local memory device or the remote memory node. In some examples, the DMA device is to: copy the content from a local memory device or copy the content to the local memory device using a memory interface.
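
    A C sketch of that command-driven flow: the DMA device pulls commands from a memory region used as a submission queue and consults an address-range configuration to decide whether each address refers to local memory or a remote memory node. The queue layout and range values are assumptions made for the example.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    struct dma_cmd   { uint64_t src, dst, len; };
    struct range_cfg { uint64_t local_base, local_limit; };   /* local-address window */

    static bool addr_is_local(const struct range_cfg *c, uint64_t a)
    {
        return a >= c->local_base && a < c->local_limit;
    }

    /* Walk the submission queue and route each command based on the configuration. */
    static void process(const struct range_cfg *cfg, const struct dma_cmd *queue, int count)
    {
        for (int i = 0; i < count; i++) {
            bool src_local = addr_is_local(cfg, queue[i].src);
            bool dst_local = addr_is_local(cfg, queue[i].dst);
            printf("cmd %d: %llu bytes, src %s, dst %s -> %s\n", i,
                   (unsigned long long)queue[i].len,
                   src_local ? "local" : "remote",
                   dst_local ? "local" : "remote",
                   src_local && dst_local ? "copy via memory interface"
                                          : "transfer via network to/from memory node");
        }
    }

    int main(void)
    {
        struct range_cfg cfg = { .local_base = 0x0, .local_limit = 0x40000000ULL };
        struct dma_cmd queue[] = {                   /* command region / submission queue */
            { .src = 0x1000,        .dst = 0x2000, .len = 4096 },
            { .src = 0x50000000ULL, .dst = 0x3000, .len = 4096 },
        };
        process(&cfg, queue, 2);
        return 0;
    }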
