Freshness and gravity of data operators executing in near memory compute in scalable disaggregated memory architectures

    Publication number: US12236240B2

    Publication date: 2025-02-25

    Application number: US18181307

    Application date: 2023-03-09

    Abstract: The disclosure provides for systems and methods for improving bandwidth and latency associated with executing data requests in disaggregated memory by leveraging usage indicators (also referred to as usage values), such as “freshness” of data operators and processing “gravity” of near memory compute functions. Examples of the systems and methods disclosed herein generate data operators comprising near memory compute functions offloaded proximate to disaggregated memory nodes, assign a usage value to each data operator based on at least one of: (i) a freshness indicator for each data operator, and (ii) a gravity indicator for each near memory compute function; and allocate data operations to the data operators based on the usage value.
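
    Purely as an illustrative sketch (not the claimed implementation), the usage-value idea could be modeled as a weighted blend of a freshness indicator and a gravity indicator, with data operations routed to the highest-scoring operator; the class names, weights, and scoring formula below are assumptions.

```python
from dataclasses import dataclass


@dataclass
class DataOperator:
    """A near-memory compute function offloaded next to a disaggregated memory node."""
    name: str
    freshness: float  # illustrative: 1.0 = recently used/updated, 0.0 = stale
    gravity: float    # illustrative: processing "pull" of the near-memory compute function

    def usage_value(self, w_fresh: float = 0.5, w_gravity: float = 0.5) -> float:
        # Assumed scoring rule: a weighted blend of the two indicators.
        return w_fresh * self.freshness + w_gravity * self.gravity


def allocate(operation: str, operators: list[DataOperator]) -> DataOperator:
    """Route a data operation to the operator with the highest usage value."""
    return max(operators, key=lambda op: op.usage_value())


if __name__ == "__main__":
    ops = [
        DataOperator("filter@node0", freshness=0.9, gravity=0.4),
        DataOperator("filter@node1", freshness=0.3, gravity=0.8),
    ]
    chosen = allocate("scan-and-filter", ops)
    print(f"dispatching to {chosen.name}")
```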

    Methods and Systems for Computing in Memory

    Publication number: US20210049125A1

    Publication date: 2021-02-18

    Application number: US17072918

    Application date: 2020-10-16

    Abstract: A method of computing in memory, the method including inputting a packet including data into a computing memory unit having a control unit, loading the data into at least one computing in memory micro-unit, processing the data in the computing in memory micro-unit, and outputting the processed data. Also, a computing in memory system including a computing in memory unit having a control unit, wherein the computing in memory unit is configured to receive a packet having data and a computing in memory micro-unit disposed in the computing in memory unit, the computing in memory micro-unit having at least one of a memory matrix and a logic elements matrix.
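
    A minimal Python sketch of the abstract's flow, assuming simple stand-ins for the hardware: a packet carries data into a computing-in-memory unit, whose control unit loads it into a micro-unit, triggers processing, and returns the output. The class names and the doubling transform are illustrative assumptions, not the patented design.

```python
from dataclasses import dataclass


@dataclass
class Packet:
    """Packet carrying data into the computing-in-memory unit."""
    data: list[int]


class MicroUnit:
    """Compute-in-memory micro-unit, modeled here as a simple element-wise transform."""
    def __init__(self) -> None:
        self.memory_matrix: list[int] = []  # stand-in for the memory matrix

    def load(self, data: list[int]) -> None:
        self.memory_matrix = list(data)

    def process(self) -> list[int]:
        # Placeholder for the logic-elements matrix: double each stored value.
        return [x * 2 for x in self.memory_matrix]


class ComputingMemoryUnit:
    """Computing-in-memory unit whose control unit steers packets to a micro-unit."""
    def __init__(self) -> None:
        self.micro_unit = MicroUnit()

    def handle(self, packet: Packet) -> list[int]:
        # Control-unit role: load the packet's data, trigger processing, return the output.
        self.micro_unit.load(packet.data)
        return self.micro_unit.process()


if __name__ == "__main__":
    unit = ComputingMemoryUnit()
    print(unit.handle(Packet(data=[1, 2, 3])))  # -> [2, 4, 6]
```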

    Methods and Systems for Computing in Memory
    Invention application

    Publication number: US20200097440A1

    Publication date: 2020-03-26

    Application number: US16139913

    Application date: 2018-09-24

    Abstract: A method of computing in memory, the method including inputting a packet including data into a computing memory unit having a control unit, loading the data into at least one computing in memory micro-unit, processing the data in the computing in memory micro-unit, and outputting the processed data. Also, a computing in memory system including a computing in memory unit having a control unit, wherein the computing in memory unit is configured to receive a packet having data and a computing in memory micro-unit disposed in the computing in memory unit, the computing in memory micro-unit having at least one of a memory matrix and a logic elements matrix.
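
    Complementing the sketch above, the system side of this abstract, a micro-unit built from a memory matrix and a logic-elements matrix, might be pictured as follows; the matrix contents and the multiply-accumulate behavior are assumptions for illustration only.

```python
# Illustrative only: models the micro-unit's memory matrix as stored weights and the
# logic-elements matrix as a multiply-accumulate stage applied to incoming data.
MEMORY_MATRIX = [
    [1, 0, 2],
    [0, 3, 1],
]


def logic_elements(memory_matrix: list[list[int]], data: list[int]) -> list[int]:
    """Multiply-accumulate each memory row against the input data vector."""
    return [sum(w * x for w, x in zip(row, data)) for row in memory_matrix]


if __name__ == "__main__":
    print(logic_elements(MEMORY_MATRIX, [1, 1, 1]))  # -> [3, 4]
```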

    FEDERATED MANAGEMENT OF DATA OPERATORS ON NEAR-MEMORY COMPUTE NODES

    Publication number: US20240385759A1

    Publication date: 2024-11-21

    Application number: US18317608

    Application date: 2023-05-15

    Abstract: Examples described herein relate to federated management of data operators across multiple near-memory compute (NMC) nodes attached to memory devices in a network-attached memory system. Federated management includes loading, executing, and scaling data operators across the multiple NMC nodes together as a group. Examples include receiving a data access request from a client application and loading data operators in the multiple NMC nodes based on a data access pattern associated with the data access request. Examples include scaling the data operators based on performance metrics for the data operators or the multiple NMC nodes in correlation with client application performance. The multiple NMC nodes may dynamically scale the data operators based on request-load, execution frequency of data operators, resource availability, or other scaling strategies. Examples also include loading and scaling the data operators based on one or more of request characteristics or data operator characteristics.
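
    A hedged sketch of how federated loading and scaling of a data operator across multiple NMC nodes as a group might look; the manager class, the request-load threshold, and the add-one-replica-per-node strategy are illustrative assumptions rather than the claimed mechanism.

```python
from dataclasses import dataclass, field


@dataclass
class NMCNode:
    """A near-memory compute node attached to a memory device."""
    node_id: str
    operators: dict[str, int] = field(default_factory=dict)  # operator name -> replica count


class FederatedManager:
    """Loads and scales a data operator across all NMC nodes together, as a group."""
    def __init__(self, nodes: list[NMCNode], scale_up_threshold: int = 100) -> None:
        self.nodes = nodes
        self.scale_up_threshold = scale_up_threshold  # assumed request-load trigger
        self.request_counts: dict[str, int] = {}

    def load_operator(self, name: str) -> None:
        # Federated load: every node in the group gets one replica of the operator.
        for node in self.nodes:
            node.operators[name] = 1

    def record_requests(self, name: str, count: int) -> None:
        self.request_counts[name] = self.request_counts.get(name, 0) + count
        self._maybe_scale(name)

    def _maybe_scale(self, name: str) -> None:
        # Assumed scaling strategy: add a replica on every node when request load is high.
        if self.request_counts.get(name, 0) >= self.scale_up_threshold:
            for node in self.nodes:
                node.operators[name] = node.operators.get(name, 0) + 1
            self.request_counts[name] = 0


if __name__ == "__main__":
    mgr = FederatedManager([NMCNode("nmc-0"), NMCNode("nmc-1")])
    mgr.load_operator("column-filter")
    mgr.record_requests("column-filter", 150)  # crosses the assumed threshold
    print([(n.node_id, n.operators) for n in mgr.nodes])
```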

    CAPABILITY ENFORCEMENT PROCESSORS
    Invention application

    Publication number: US20190065408A1

    Publication date: 2019-02-28

    Application number: US15693149

    Application date: 2017-08-31

    Abstract: Example implementations relate to a capability enforcement processor. In an example, a capability enforcement processor may be interposed between a memory that stores data accessible via capabilities and a system processor that executes processes. The capability enforcement processor intercepts a memory request from the system processor and enforces the memory request based on capability enforcement processor capabilities maintained in per-process capability spaces of the capability enforcement processor.
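
    As an illustrative sketch only, a capability enforcement processor interposed between the system processor and memory could be modeled as below: each intercepted memory request is checked against the requesting process's capability space. The Capability fields and the grant/enforce API are assumptions, not the patented design.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Capability:
    """An assumed capability granting access to an address range with given rights."""
    base: int
    length: int
    rights: frozenset  # e.g. frozenset({"read", "write"})

    def permits(self, address: int, access: str) -> bool:
        return self.base <= address < self.base + self.length and access in self.rights


class CapabilityEnforcementProcessor:
    """Sits between the system processor and memory; checks each intercepted request
    against the per-process capability space before letting it through."""
    def __init__(self) -> None:
        self.capability_spaces: dict[int, list[Capability]] = {}  # pid -> capabilities

    def grant(self, pid: int, cap: Capability) -> None:
        self.capability_spaces.setdefault(pid, []).append(cap)

    def enforce(self, pid: int, address: int, access: str) -> bool:
        # Allow the memory request only if some capability in the process's space covers it.
        return any(c.permits(address, access) for c in self.capability_spaces.get(pid, ()))


if __name__ == "__main__":
    cep = CapabilityEnforcementProcessor()
    cep.grant(pid=42, cap=Capability(base=0x1000, length=0x100, rights=frozenset({"read"})))
    print(cep.enforce(42, 0x1010, "read"))   # True: covered by the granted capability
    print(cep.enforce(42, 0x1010, "write"))  # False: write right not granted
```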
