SOFTWARE-DEFINED COHERENT CACHING OF POOLED MEMORY

    Publication No.: US20210064531A1

    Publication Date: 2021-03-04

    Application No.: US17092803

    Filing Date: 2020-11-09

    IPC Classification: G06F12/0817

    Abstract: Methods and apparatus for software-defined coherent caching of pooled memory. The pooled memory is implemented in an environment having a disaggregated architecture in which compute resources, such as compute platforms, are connected to disaggregated memory via a network or fabric. Software-defined caching policies are implemented in hardware in a processor SoC or a discrete device such as a Network Interface Controller (NIC) by programming logic in an FPGA or accelerator on the SoC or discrete device. The programmed logic implements the software-defined caching policies in hardware to effect disaggregated memory (DM) caching, in an associated DM cache, of at least a portion of an address space allocated for a software application in the disaggregated memory. In connection with DM cache operations, such as cache lines evicted from a CPU, logic implemented in hardware determines whether a cache line in the DM cache is to be evicted and applies the software-defined caching policy for the DM cache, including the associated memory coherency operations.
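    The eviction-handling flow described in this abstract can be illustrated with a short software model. The names below (dm_cache_line, dm_policy_on_cpu_evict, writeback_to_pooled_memory, and so on) are hypothetical and not taken from the patent; this is a minimal sketch assuming a write-back DM cache with one valid/dirty state per line, whereas the patent describes the policy logic being programmed into an FPGA or accelerator on an SoC or NIC.

        /* Minimal, hypothetical model of a software-defined DM cache eviction
         * policy. Names and structures are illustrative only. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        typedef struct {
            uint64_t tag;     /* address tag of the cached disaggregated-memory line */
            bool     valid;
            bool     dirty;   /* modified locally, not yet written back to the pool */
        } dm_cache_line;

        /* Assumed hooks into the fabric/pooled-memory side (stubs here). */
        static void writeback_to_pooled_memory(uint64_t addr) {
            printf("writeback line 0x%llx to pooled memory\n", (unsigned long long)addr);
        }
        static void send_coherency_invalidate(uint64_t addr) {
            printf("invalidate line 0x%llx at other sharers\n", (unsigned long long)addr);
        }

        /* Called when a CPU cache line belonging to the disaggregated address
         * space is evicted: decide whether to keep it in the DM cache or evict
         * the resident line, and perform the required coherency operations. */
        static void dm_policy_on_cpu_evict(dm_cache_line *line, uint64_t evicted_addr,
                                           bool addr_is_hot) {
            if (line->valid && line->tag != evicted_addr) {
                /* Conflict: the policy chooses to evict the resident DM line. */
                if (line->dirty)
                    writeback_to_pooled_memory(line->tag);
                send_coherency_invalidate(line->tag);
                line->valid = false;
            }
            if (addr_is_hot) {
                /* Policy decides the evicted CPU line is worth holding in the DM cache. */
                line->tag = evicted_addr;
                line->valid = true;
                line->dirty = true;
            } else {
                /* Cold data bypasses the DM cache and goes straight back to the pool. */
                writeback_to_pooled_memory(evicted_addr);
            }
        }

        int main(void) {
            dm_cache_line line = { .tag = 0, .valid = false, .dirty = false };
            dm_policy_on_cpu_evict(&line, 0x2000, /*addr_is_hot=*/true);
            dm_policy_on_cpu_evict(&line, 0x3000, /*addr_is_hot=*/false);
            return 0;
        }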

    AUTOMATIC LOCALIZATION OF ACCELERATION IN EDGE COMPUTING ENVIRONMENTS

    Publication No.: US20200026575A1

    Publication Date: 2020-01-23

    Application No.: US16586576

    Filing Date: 2019-09-27

    IPC Classification: G06F9/50

    Abstract: Methods, apparatus, systems, and machine-readable storage media for an edge computing device that can access and select between local and remote acceleration resources for edge computing processing are disclosed. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry information that indicates availability of a remote acceleration resource to execute the function. An estimated time (and cost, or other identifiable or estimable considerations) to execute the function at each location is determined. The use of the local acceleration circuitry or the remote acceleration resource is then selected based on the estimated time and other relevant factors in relation to a service level agreement.
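    A minimal sketch of the selection step follows. The field and function names (accel_telemetry, select_acceleration, sla_deadline_ms) and the cost tie-breaking rule are assumptions made for illustration; the patent only specifies that estimated time and other factors are weighed against a service level agreement.

        /* Hypothetical sketch of local-vs-remote acceleration selection. */
        #include <stdbool.h>
        #include <stdio.h>

        typedef struct {
            bool   available;       /* telemetry: is the accelerator usable right now? */
            double est_exec_ms;     /* estimated execution time of the function */
            double est_transfer_ms; /* data movement cost (0 for local circuitry) */
            double est_cost;        /* monetary or energy cost, policy-defined units */
        } accel_telemetry;

        typedef enum { USE_LOCAL, USE_REMOTE, REJECT } accel_choice;

        /* Pick local circuitry or the remote resource so the SLA deadline is met,
         * preferring the cheaper option when both satisfy the deadline. */
        static accel_choice select_acceleration(accel_telemetry local,
                                                accel_telemetry remote,
                                                double sla_deadline_ms) {
            double local_ms  = local.est_exec_ms + local.est_transfer_ms;
            double remote_ms = remote.est_exec_ms + remote.est_transfer_ms;
            bool local_ok  = local.available  && local_ms  <= sla_deadline_ms;
            bool remote_ok = remote.available && remote_ms <= sla_deadline_ms;

            if (local_ok && remote_ok)
                return (local.est_cost <= remote.est_cost) ? USE_LOCAL : USE_REMOTE;
            if (local_ok)  return USE_LOCAL;
            if (remote_ok) return USE_REMOTE;
            return REJECT; /* neither option can meet the service level agreement */
        }

        int main(void) {
            accel_telemetry local  = { true, 12.0, 0.0, 3.0 };
            accel_telemetry remote = { true,  4.0, 5.0, 1.5 };
            accel_choice c = select_acceleration(local, remote, 15.0);
            printf("choice: %s\n", c == USE_LOCAL ? "local" :
                                   c == USE_REMOTE ? "remote" : "reject");
            return 0;
        }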

    PLATFORM AMBIENT DATA MANAGEMENT SCHEMES FOR TIERED ARCHITECTURES

    Publication No.: US20200319696A1

    Publication Date: 2020-10-08

    Application No.: US16907264

    Filing Date: 2020-06-21

    Abstract: Methods and apparatus for platform ambient data management schemes for tiered architectures. A platform including one or more CPUs coupled to multiple tiers of memory comprising various types of DIMMs (e.g., DRAM, hybrid, DCPMM) is powered by a battery subsystem that receives input energy harvested from one or more green energy sources. Energy threshold conditions are detected, and associated memory reconfiguration is performed. The memory reconfiguration may include, but is not limited to, copying data between DIMMs (or memory ranks on the DIMMs) in the same tier, copying data from a first type of memory to a second type of memory on a hybrid DIMM, and flushing dirty lines in a DIMM in a first memory tier that is being used as a cache for a second memory tier. Following the data copy and flushing operations, the DIMMs and/or their memory devices are powered down and/or deactivated. In one aspect, machine learning models trained on historical data are employed to project harvested energy levels, which are used in detecting energy threshold conditions.
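    The threshold-driven reconfiguration sequence can be sketched as follows. The thresholds, function names (check_energy_thresholds, power_down_unused_dimms, etc.), and the single low-energy threshold are assumptions for this example; the patent describes the projected energy levels coming from machine learning models trained on historical data.

        /* Illustrative sketch of energy-threshold-driven memory reconfiguration. */
        #include <stdio.h>

        /* Stubs standing in for the platform operations named in the abstract. */
        static void flush_dirty_lines_in_cache_tier(void) { puts("flush dirty lines"); }
        static void copy_data_to_remaining_dimms(void)    { puts("consolidate data"); }
        static void power_down_unused_dimms(void)         { puts("power down DIMMs"); }

        /* A projected harvested-energy level (e.g., from an ML model) is compared
         * against a threshold; crossing it triggers reconfiguration so the
         * platform's memory footprint fits the available energy budget. */
        static void check_energy_thresholds(double projected_energy_wh,
                                            double low_threshold_wh) {
            if (projected_energy_wh < low_threshold_wh) {
                /* Not enough projected energy: shrink the active memory footprint. */
                flush_dirty_lines_in_cache_tier();   /* tier-1 DIMM used as a cache */
                copy_data_to_remaining_dimms();      /* within-tier / hybrid-DIMM copies */
                power_down_unused_dimms();           /* then deactivate emptied devices */
            } else {
                puts("projected energy sufficient; no reconfiguration needed");
            }
        }

        int main(void) {
            check_energy_thresholds(/*projected=*/8.0, /*low threshold=*/10.0);
            return 0;
        }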

    CONCEPT FOR ACCESSING COMPUTER MEMORY OF A MEMORY POOL

    Publication No.: US20190235773A1

    Publication Date: 2019-08-01

    Application No.: US16378828

    Filing Date: 2019-04-09

    IPC Classification: G06F3/06 G06F21/62

    Abstract: Examples relate to a memory controller or memory controller device for a memory pool of a computer system, to a management apparatus or management device for the computer system, to an apparatus or device for a compute node of the computer system, and to corresponding methods and computer programs. The memory pool comprises computer memory that is accessible to a plurality of compute nodes of the computer system via the memory controller. The memory controller comprises interface circuitry for communicating with the plurality of compute nodes, and control circuitry configured to obtain an access control instruction via the interface circuitry. The access control instruction indicates that access to a portion of the computer memory of the memory pool is to be granted to one or more processes executed by the plurality of compute nodes, and comprises information related to a node identifier and a process identifier for each of the one or more processes. The control circuitry is configured to provide access to the portion of the computer memory of the memory pool to the one or more processes based on the access control instruction.
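    A small model of the access control instruction and the grant check follows. The structure layout, MAX_GRANTS limit, and function names (access_control_instruction, access_permitted) are assumptions made for illustration; the patent only specifies that the instruction carries a node identifier and a process identifier per granted process.

        /* Hypothetical model of a memory-pool access control instruction. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define MAX_GRANTS 4

        /* One (node, process) pair allowed to access a region of the memory pool. */
        typedef struct {
            uint32_t node_id;     /* identifies the compute node */
            uint32_t process_id;  /* identifies the process on that node */
        } grant_entry;

        /* Access control instruction received by the memory controller over its
         * interface circuitry: a memory-pool region plus the granted processes. */
        typedef struct {
            uint64_t    region_base;
            uint64_t    region_size;
            unsigned    num_grants;
            grant_entry grants[MAX_GRANTS];
        } access_control_instruction;

        /* Control-circuitry check: is an access from (node, process) to addr
         * covered by the installed access control instruction? */
        static bool access_permitted(const access_control_instruction *aci,
                                     uint32_t node_id, uint32_t process_id,
                                     uint64_t addr) {
            if (addr < aci->region_base || addr >= aci->region_base + aci->region_size)
                return false;
            for (unsigned i = 0; i < aci->num_grants; i++)
                if (aci->grants[i].node_id == node_id &&
                    aci->grants[i].process_id == process_id)
                    return true;
            return false;
        }

        int main(void) {
            access_control_instruction aci = {
                .region_base = 0x100000, .region_size = 0x10000,
                .num_grants = 1, .grants = { { .node_id = 2, .process_id = 4711 } }
            };
            printf("node 2 / pid 4711: %s\n",
                   access_permitted(&aci, 2, 4711, 0x100800) ? "granted" : "denied");
            printf("node 3 / pid 9999: %s\n",
                   access_permitted(&aci, 3, 9999, 0x100800) ? "granted" : "denied");
            return 0;
        }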