COMPUTING DEVICE WITH INDEPENDENTLY COHERENT NODES

    Publication number: US20240377953A1

    Publication date: 2024-11-14

    Application number: US18641779

    Filing date: 2024-04-22

    Abstract: A computing device includes a system-on-a-chip. The computing device comprises a network interface controller (NIC) that hosts a plurality of virtual functions and physical functions. Two or more compute nodes are coupled to the NIC. Each compute node is configured to operate a plurality of Virtual Machines (VMs). Each VM is configured to operate in conjunction with a virtual function via a virtual function driver. A dedicated VM operates in conjunction with a virtual NIC using a physical function hosted by the NIC via a physical function driver hosted by the compute node. The computing device further comprises a fabric manager configured to own a physical function of the NIC, to bind virtual functions hosted by the NIC to individual compute nodes, and to pool I/O devices across the two or more compute nodes.
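
    The VF-to-node binding that the fabric manager maintains can be sketched as a small bookkeeping structure. `FabricManager`, `bind_vf`, and `vfs_for_node` are illustrative names, not terms from the patent:

```python
# Minimal sketch of a fabric manager that owns a NIC physical function (PF)
# and binds virtual functions (VFs) to individual compute nodes.
# All class and method names here are hypothetical.

class FabricManager:
    def __init__(self, num_vfs):
        # VF index -> compute node id (None = unbound)
        self.vf_bindings = {vf: None for vf in range(num_vfs)}

    def bind_vf(self, vf, node_id):
        if self.vf_bindings[vf] is not None:
            raise ValueError(f"VF {vf} already bound")
        self.vf_bindings[vf] = node_id

    def vfs_for_node(self, node_id):
        # All VFs currently bound to a given compute node.
        return [vf for vf, n in self.vf_bindings.items() if n == node_id]

fm = FabricManager(num_vfs=4)
fm.bind_vf(0, node_id="node-A")
fm.bind_vf(1, node_id="node-A")
fm.bind_vf(2, node_id="node-B")
```

    Pooling I/O devices across nodes would follow the same pattern: a table owned by the fabric manager rather than by any single node.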

    MANAGING AND RANKING MEMORY RESOURCES

    Publication number: US20240361947A1

    Publication date: 2024-10-31

    Application number: US18766119

    Filing date: 2024-07-08

    CPC classification number: G06F3/0653 G06F3/0608 G06F3/0673

    Abstract: The present disclosure relates to systems, methods, and computer-readable media for tracking memory usage data on a memory controller system and providing a mechanism whereby one or multiple accessing agents (e.g., computing nodes, applications, virtual machines) can access memory usage data for a memory resource managed by a memory controller. The systems described herein facilitate generation of and access to heatmaps containing memory usage data, covering features and functionality related to generating and maintaining the heatmaps as well as providing access to the heatmaps to a variety of accessing agents. This memory tracking and access is performed with low processing overhead while providing useful information to accessing agents in connection with memory resources managed by a memory controller.
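
    A controller-side heatmap of the kind described can be sketched as per-segment access counters that accessing agents read via snapshots. The class name and segment size below are illustrative assumptions:

```python
from collections import defaultdict

class AccessHeatmap:
    """Per-segment access counters, a sketch of a controller-side heatmap."""
    def __init__(self, segment_size=4096):
        self.segment_size = segment_size
        self.counts = defaultdict(int)   # segment index -> access count

    def record_access(self, address):
        # The controller bumps a counter on each access it services.
        self.counts[address // self.segment_size] += 1

    def snapshot(self):
        # Accessing agents (nodes, VMs, applications) read a copy, not
        # the live counters, keeping the hot path cheap.
        return dict(self.counts)

hm = AccessHeatmap()
for addr in (0, 100, 5000, 5100, 5200):
    hm.record_access(addr)
```

    Different access granularities, as the abstract mentions, would amount to aggregating these counters at coarser segment sizes before handing them to an agent.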

    ADDRESSING FOR DISAGGREGATED MEMORY POOL

    Publication number: US20230315626A1

    Publication date: 2023-10-05

    Application number: US18024590

    Filing date: 2021-05-31

    CPC classification number: G06F12/0653 G06F12/0607 G06F9/5016

    Abstract: A method for memory address mapping in a disaggregated memory system includes receiving an indication of one or more ranges of host physical addresses (HPAs) from a compute node of a plurality of compute nodes, the one or more ranges of HPAs including a plurality of memory addresses corresponding to different allocation slices of the disaggregated memory pool that are allocated to the compute node. The one or more ranges of HPAs are converted into a contiguous range of device physical addresses (DPAs). For each DPA, a target address decoder (TAD) is identified based on a slice identifier and a slice-to-TAD index. Each DPA is mapped to a media-specific physical element of a physical memory unit of the disaggregated memory pool based on the TAD.
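
    The two mapping steps (HPA ranges collapsed onto contiguous DPAs, then a slice-to-TAD lookup per DPA) can be sketched as follows. The slice size and table contents are illustrative, not values from the patent:

```python
SLICE_SIZE = 4096  # illustrative slice size; real allocation slices are larger

def hpa_to_dpa(hpa, hpa_base, dpa_base):
    # Collapse an HPA from the node's (possibly discontiguous) ranges
    # onto the pool's contiguous device physical address space.
    return dpa_base + (hpa - hpa_base)

def tad_for_dpa(dpa, slice_to_tad):
    # The slice identifier derived from the DPA indexes a
    # slice-to-TAD table; the TAD then maps to physical media.
    slice_id = dpa // SLICE_SIZE
    return slice_to_tad[slice_id]

slice_to_tad = {0: "TAD-0", 1: "TAD-1", 2: "TAD-0"}  # hypothetical index
dpa = hpa_to_dpa(hpa=0x10_2000, hpa_base=0x10_0000, dpa_base=0)
```

    The final media-specific step (DPA to a physical element of a memory unit) lives behind the TAD and is device dependent.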

    DIRECT SWAP CACHING WITH ZERO LINE OPTIMIZATIONS

    Publication number: US20240103876A1

    Publication date: 2024-03-28

    Application number: US18503869

    Filing date: 2023-11-07

    Abstract: Systems and methods related to direct swap caching with zero line optimizations are described. A method for managing a system having a near memory and a far memory comprises receiving a request from a requestor to read a block of data that is stored in either the near memory or the far memory. The method includes analyzing a metadata portion associated with the block of data, the metadata portion comprising both (1) information concerning whether the near memory or the far memory contains the block of data and (2) information concerning whether a data portion associated with the block of data is all zeros. The method further includes, when the metadata indicates that the data portion is all zeros, synthesizing the data portion corresponding to the block of data instead of retrieving it from the far memory, and transmitting the synthesized data portion to the requestor.
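
    The zero-line read path can be sketched as a metadata check that either synthesizes a zero line or reads whichever tier the metadata names. The dictionaries below are hypothetical stand-ins for the hardware structures:

```python
LINE_SIZE = 64  # illustrative cache-line size in bytes

def read_block(block_id, metadata, near_memory, far_memory):
    """Sketch of the read path: metadata says where the line lives and
    whether it is all zeros. Structures are hypothetical, not the
    patented design."""
    meta = metadata[block_id]
    if meta["all_zeros"]:
        # Zero-line optimization: synthesize the line instead of
        # spending a far-memory access on known-zero data.
        return bytes(LINE_SIZE)
    store = near_memory if meta["in_near"] else far_memory
    return store[block_id]

metadata = {7: {"in_near": False, "all_zeros": True},
            8: {"in_near": True,  "all_zeros": False}}
near = {8: b"\x01" * LINE_SIZE}
far = {}
```

    Note that block 7 is nominally in far memory, yet the read never touches `far` because the metadata already proves the answer.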

    MANAGING AND RANKING MEMORY RESOURCES

    Publication number: US20220164118A1

    Publication date: 2022-05-26

    Application number: US17102084

    Filing date: 2020-11-23

    Abstract: The present disclosure relates to systems, methods, and computer-readable media for managing tracked memory usage data and performing various actions based on memory usage data tracked by a memory controller on a memory device. For example, systems described herein involve collecting and compiling data across one or more memory controllers to evaluate characteristics of the memory usage data to determine hotness metric(s) for segments of a memory resource. The systems described herein may perform a variety of segment actions based on the hotness metric(s). In addition, the systems described herein can compile the memory usage data according to one or more access granularities. This compiled data may further be shared with multiple accessing agents in accordance with access resolutions of the respective accessing agents.
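
    Compiling usage data across controllers into a ranking can be sketched with total access count as a deliberately simple hotness metric; the function name and data shapes are illustrative:

```python
def rank_segments(usage_by_controller):
    """Merge per-controller access counts and rank segments from
    hottest to coldest. Total accesses stands in for the richer
    hotness metric(s) the disclosure describes."""
    totals = {}
    for counts in usage_by_controller:
        for seg, n in counts.items():
            totals[seg] = totals.get(seg, 0) + n
    return sorted(totals, key=totals.get, reverse=True)

# Two hypothetical controllers reporting counts for overlapping segments.
ranking = rank_segments([{0: 5, 1: 2}, {1: 9, 2: 1}])
```

    Segment actions (migration, tiering, eviction) would then be driven off the head and tail of this ranking.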

    POOLED MEMORY CONTROLLER FOR THIN-PROVISIONING DISAGGREGATED MEMORY

    Publication number: US20220066928A1

    Publication date: 2022-03-03

    Application number: US17010548

    Filing date: 2020-09-02

    Abstract: A thin-provisioned multi-node computer system comprising a disaggregated memory pool and a pooled memory controller. The disaggregated memory pool is configured to make a shared memory capacity available to each of a plurality of compute nodes. The pooled memory controller is configured to assign, to each compute node of the plurality of compute nodes, a portion of the disaggregated memory pool such that a currently assigned total of assigned portions of the disaggregated memory pool is less than the shared memory capacity. The pooled memory controller is further configured to receive a request to assign an additional portion of the disaggregated memory pool such that the currently assigned total and the additional portion would exceed a predefined threshold amount of the shared memory capacity, to un-assign an assigned portion of the disaggregated memory pool, and assign the additional portion of the disaggregated memory pool.
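
    The thin-provisioning flow (assign under a threshold, un-assign when a request would cross it) can be sketched as follows. The victim-selection policy here, evicting the largest current assignment, is a simplification for illustration, not the patented mechanism:

```python
class PooledMemoryController:
    """Sketch of thin provisioning: assignments may not cross a
    threshold below the shared capacity."""
    def __init__(self, capacity, threshold):
        self.capacity = capacity
        self.threshold = threshold   # predefined limit, < capacity
        self.assigned = {}           # node id -> assigned amount

    def total_assigned(self):
        return sum(self.assigned.values())

    def assign(self, node, amount):
        if self.total_assigned() + amount > self.threshold:
            # Un-assign before honoring the request (victim choice is
            # simplified here: the largest current assignment).
            victim = max(self.assigned, key=self.assigned.get)
            del self.assigned[victim]
        self.assigned[node] = self.assigned.get(node, 0) + amount

pmc = PooledMemoryController(capacity=100, threshold=90)
pmc.assign("A", 50)
pmc.assign("B", 30)
pmc.assign("C", 20)   # 80 + 20 would exceed 90, so A is un-assigned first
```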

    DETECTING AND MITIGATING MEMORY ATTACKS

    Publication number: US20230385206A1

    Publication date: 2023-11-30

    Application number: US17828903

    Filing date: 2022-05-31

    CPC classification number: G06F12/1458 G06F21/554 G06F2212/1052

    Abstract: The present disclosure relates to systems and methods implemented on a memory controller for detecting and mitigating memory attacks (e.g., row hammer attacks). For example, a memory controller may track activations of row addresses within a memory hardware (e.g., a DRAM device) and determine whether a pattern of activations is indicative of a row hammer attack. This is determined using a counting mode for corresponding memory sub-banks. Where a likely row hammer attack is detected, the memory controller may activate a sampling mode (rather than the counting mode) for a particular sub-bank to identify which of the row addresses should be refreshed on the memory hardware. The implementations described herein provide a low computational cost alternative to heavy-handed detection mechanisms that require access to significant computing resources to accurately detect and mitigate row hammer attacks.
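
    The two-mode scheme (counting activations per sub-bank, then sampling once a likely attack is detected) can be sketched as a small state machine. The threshold and structures are illustrative assumptions:

```python
class SubBankMonitor:
    """Counting mode tallies activations for a memory sub-bank; once the
    tally suggests a row hammer attack, the sub-bank switches to sampling
    mode to identify which rows need a targeted refresh."""
    def __init__(self, attack_threshold):
        self.attack_threshold = attack_threshold
        self.activations = 0
        self.sampling = False
        self.sampled_rows = set()

    def activate(self, row):
        if self.sampling:
            # Sampling mode: record row addresses as refresh candidates.
            self.sampled_rows.add(row)
            return
        # Counting mode: cheap per-sub-bank tally, no per-row state.
        self.activations += 1
        if self.activations >= self.attack_threshold:
            self.sampling = True

mon = SubBankMonitor(attack_threshold=3)
for row in (5, 5, 5, 5, 6):
    mon.activate(row)
```

    Keeping only a counter per sub-bank in the common case is what makes this cheaper than tracking every row all the time.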

    DIRECT SWAP CACHING WITH NOISY NEIGHBOR MITIGATION AND DYNAMIC ADDRESS RANGE ASSIGNMENT

    Publication number: US20230289288A1

    Publication date: 2023-09-14

    Application number: US17735767

    Filing date: 2022-05-03

    CPC classification number: G06F12/0802 G06F2212/62

    Abstract: Systems and methods related to direct swap caching with noisy neighbor mitigation and dynamic address range assignment are described. A system includes a host operating system (OS), configured to support a first set of tenants associated with a compute node, where the host OS has access to: (1) a first swappable range of memory addresses associated with a near memory and (2) a second swappable range of memory addresses associated with a far memory. The host OS is configured to allocate memory in a granular fashion such that each allocation of memory to a tenant includes memory addresses corresponding to a conflict set having a conflict set size. The conflict set includes a first conflicting region associated with the first swappable range of memory addresses with the near memory and a second conflicting region associated with the second swappable range of memory addresses with the far memory.
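
    Granular allocation in whole conflict sets, so that no two tenants ever share one, can be sketched with a toy free list. The names and set size are hypothetical:

```python
CONFLICT_SET_SIZE = 4096  # illustrative; each set pairs a near-memory
                          # region with its far-memory counterpart

def allocate(tenant, n_sets, free_sets, owner):
    """Grant memory only in whole conflict sets so two tenants never
    contend on the same near/far pair (the noisy-neighbor mitigation).
    Structures are hypothetical stand-ins for the host OS allocator."""
    grant = [free_sets.pop() for _ in range(n_sets)]
    for s in grant:
        owner[s] = tenant
    return grant

free_sets = [0, 1, 2, 3]
owner = {}
allocate("tenant-A", 2, free_sets, owner)
allocate("tenant-B", 1, free_sets, owner)
```

    Because each tenant owns its conflict sets outright, one tenant's swap traffic between near and far memory cannot evict another tenant's lines.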

    MEMORY TIERING TECHNIQUES IN COMPUTING SYSTEMS

    Publication number: US20230143375A1

    Publication date: 2023-05-11

    Application number: US18154164

    Filing date: 2023-01-13

    Abstract: Techniques of memory tiering in computing devices are disclosed herein. One example technique includes, upon receiving a request to read data corresponding to a system memory section, retrieving, from a first tier in a first memory, data from a data portion and metadata from a metadata portion of the first tier. The technique can then include analyzing data location information in the metadata to determine whether the first tier currently contains data corresponding to the system memory section in the received request. In response to determining that the first tier currently contains that data, the retrieved data from the data portion of the first memory is transmitted to the processor in response to the received request. Otherwise, the technique can include identifying a memory location in the first or second memory that contains data corresponding to the system memory section and retrieving the data from the identified memory location.
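
    The tier-check read path can be sketched as a tag comparison against the first tier's metadata; the direct-mapped layout below is an illustrative assumption, not the patented organization:

```python
def tiered_read(section, first_tier, second_tier, num_slots=4):
    """Sketch: each first-tier slot stores (data, section tag); the tag
    is the data-location metadata the abstract describes."""
    slot = section % num_slots
    data, tagged_section = first_tier.get(slot, (None, None))
    if tagged_section == section:     # metadata says the first tier has it
        return data
    return second_tier[section]       # otherwise fetch from the next tier

first_tier = {1: (b"hot", 5)}         # slot 1 currently caches section 5
second_tier = {5: b"hot", 9: b"cold"}
```

    Sections 5 and 9 map to the same slot here, so reading section 9 misses on the tag and falls through to the second tier.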

    COMPUTING DEVICE WITH INDEPENDENTLY COHERENT NODES

    Publication number: US20220075520A1

    Publication date: 2022-03-10

    Application number: US17016156

    Filing date: 2020-09-09

    Abstract: A computing device comprises two or more compute nodes that each include two or more processor cores. Each compute node comprises an independently coherent domain that is not coherent with other compute nodes. A central IO die is communicatively coupled to each of the two or more compute nodes. A plurality of natively attached volatile memory units are attached to the central IO die via one or more memory controllers. The central IO die includes one or more home agents for each compute node. The home agents are configured to map memory access requests received from the compute nodes to one or more addresses within the natively attached volatile memory units.
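
    The per-node home-agent mapping can be sketched as a base-plus-offset translation into the natively attached memory. The window layout and names are illustrative assumptions:

```python
class HomeAgent:
    """One home agent per compute node on the central IO die maps that
    node's requests onto its window of natively attached memory.
    The fixed base/size windows here are a simplification."""
    def __init__(self, node_id, base, size):
        self.node_id = node_id
        self.base = base    # start of this node's window in native memory
        self.size = size

    def map_request(self, node_address):
        # A node-local address translates to a native-memory address;
        # each node's window is disjoint, preserving independent coherence.
        assert node_address < self.size
        return self.base + node_address

ha_a = HomeAgent("node-A", base=0x0000, size=0x4000)
ha_b = HomeAgent("node-B", base=0x4000, size=0x4000)
```

    Because each node's requests resolve through its own home agents into disjoint windows, no coherence traffic needs to cross between nodes.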
