Method and apparatus for providing persistence to remote non-volatile memory

    Publication Number: US11847048B2

    Publication Date: 2023-12-19

    Application Number: US17031518

    Filing Date: 2020-09-24

    Abstract: A processing device and methods of controlling remote persistent writes are provided. Methods include receiving an instruction of a program to issue a persistent write to remote memory. The methods also include logging an entry in a local domain when the persistent write instruction is received and providing a first indication that the persistent write will be persisted to the remote memory. The methods also include executing the persistent write to the remote memory and providing a second indication that the persistent write to the remote memory is completed. The methods also include providing both the first and second indications when it is determined that the persistent write will not be executed according to global ordering, and providing only the second indication when it is determined that the persistent write to remote memory will be executed according to global ordering.
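
    For illustration only, a minimal C++ sketch of the two-indication flow the abstract describes. All names (PersistentWriteController, WriteRequest) and the callback stubs standing in for the hardware indications are assumptions, not the patented implementation.

        // Hypothetical sketch of the two-indication persistent-write flow.
        // Names and structures are illustrative only.
        #include <cstdint>
        #include <vector>

        struct WriteRequest { uint64_t remote_addr; uint64_t data; bool globally_ordered; };

        class PersistentWriteController {
        public:
            void issue(const WriteRequest& req) {
                if (!req.globally_ordered) {
                    log_.push_back(req);        // log an entry in the local domain
                    first_indication(req);      // "write will be persisted"
                    write_to_remote(req);       // execute the persistent write
                    second_indication(req);     // "write has been persisted"
                } else {
                    write_to_remote(req);       // globally ordered path: execute first
                    second_indication(req);     // only the completion indication is given
                }
            }
        private:
            std::vector<WriteRequest> log_;
            void write_to_remote(const WriteRequest&) { /* issue write to remote NVM */ }
            void first_indication(const WriteRequest&) { /* notify program: accepted */ }
            void second_indication(const WriteRequest&) { /* notify program: completed */ }
        };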

    Using Epoch Counter Values for Controlling the Retention of Cache Blocks in a Cache

    Publication Number: US20230095461A1

    Publication Date: 2023-03-30

    Application Number: US17491478

    Filing Date: 2021-09-30

    Inventor: Nuwan Jayasena

    Abstract: An electronic device includes a cache, a memory, and a controller. The controller stores an epoch counter value in metadata for a location in the memory when a cache block evicted from the cache is stored in the location. The controller also controls how the cache block is retained in the cache based at least in part on the epoch counter value when the cache block is subsequently retrieved from the location and stored in the cache.
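
    A minimal C++ sketch of the idea, with assumed names (CacheController, MemoryLocationMetadata) and an assumed retention policy keyed to epoch age; the concrete retention behavior is defined by the patent, not by this illustration.

        // Illustrative-only sketch: record an epoch counter with an evicted block
        // and use it to bias retention when the block is brought back in.
        #include <cstdint>
        #include <unordered_map>

        struct MemoryLocationMetadata { uint64_t epoch_at_eviction = 0; };

        class CacheController {
        public:
            void on_evict(uint64_t mem_addr) {
                // Store the current epoch counter in metadata for the memory location.
                metadata_[mem_addr].epoch_at_eviction = current_epoch_;
            }
            // Assumed policy: blocks evicted in a recent epoch get higher
            // retention priority when they are retrieved into the cache again.
            int insertion_priority_on_fill(uint64_t mem_addr) const {
                auto it = metadata_.find(mem_addr);
                if (it == metadata_.end()) return 0;                // no recorded eviction
                uint64_t age = current_epoch_ - it->second.epoch_at_eviction;
                return age < 2 ? 3 : (age < 8 ? 1 : 0);             // assumed thresholds
            }
            void advance_epoch() { ++current_epoch_; }
        private:
            uint64_t current_epoch_ = 0;
            std::unordered_map<uint64_t, MemoryLocationMetadata> metadata_;
        };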

    APPROACH FOR REDUCING SIDE EFFECTS OF COMPUTATION OFFLOAD TO MEMORY

    Publication Number: US20230004491A1

    Publication Date: 2023-01-05

    Application Number: US17364854

    Filing Date: 2021-06-30

    Abstract: A technical solution to the technical problem of how to reduce the undesirable side effects of offloading computations to memory uses read hints to preload results of memory-side processing into a processor-side cache. A cache controller, in response to identifying a read hint in a memory-side processing instruction, causes results of the memory-side processing to be preloaded into a processor-side cache. Implementations include, without limitation: enabling or disabling the preloading based upon cache thrashing levels; preloading results, or portions of results, of memory-side processing to particular destination caches; preloading results based upon priority and/or degree of confidence, and/or during periods of low data bus and/or command bus utilization; last-store considerations; and enforcing an ordering constraint to ensure that preloading occurs after memory-side processing results are complete.
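
    For illustration, a hedged C++ sketch of a read hint gating result preloading, with assumed structures (OffloadCommand, ProcessorSideCache) and a simplified thrashing check; it mirrors only the hint-gated preload and the ordering constraint described above.

        // Hypothetical sketch: a read hint attached to an offloaded (memory-side)
        // operation causes the processor-side cache to preload the result region.
        #include <cstdint>

        struct OffloadCommand {
            uint64_t result_addr;   // where memory-side processing writes its result
            uint64_t result_bytes;
            bool     read_hint;     // "the host will read this result soon"
        };

        class ProcessorSideCache {
        public:
            // Called only after the memory-side result is complete, so the
            // ordering constraint (preload after completion) is respected.
            void on_offload_complete(const OffloadCommand& cmd) {
                if (cmd.read_hint && !thrashing_detected_) {
                    prefetch_range(cmd.result_addr, cmd.result_bytes);
                }
            }
            void set_thrashing(bool t) { thrashing_detected_ = t; }
        private:
            bool thrashing_detected_ = false;
            void prefetch_range(uint64_t addr, uint64_t bytes) {
                // Issue reads so the result lands in the processor-side cache.
                (void)addr; (void)bytes;
            }
        };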

    SEMI-SORTING COMPRESSION WITH ENCODING AND DECODING TABLES

    Publication Number: US20220239315A1

    Publication Date: 2022-07-28

    Application Number: US17722931

    Filing Date: 2022-04-18

    Abstract: A data processing platform, method, and program product perform compression and decompression of a set of data items. Suffix data and a prefix are selected for each respective data item in the set of data items based on data content of the respective data item. The set of data items is sorted based on the prefixes. The prefixes are encoded by querying multiple encoding tables to create a code word containing compressed information representing values of all prefixes for the set of data items. The code word and suffix data for each of the data items are stored in memory. The code word is decompressed to recover the prefixes. The recovered prefixes are paired with their respective suffix data.
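
    A toy C++ sketch showing only the prefix/suffix split, sorting by prefix, and packing of all prefixes into a single code word; it uses a trivial identity encoding in place of the patent's multiple encoding and decoding tables, and every name and bit width is an assumption.

        // Toy illustration: split each item into a 4-bit prefix and 4-bit suffix,
        // sort, pack prefixes into one 64-bit code word (up to 16 items), and
        // recover items by pairing decoded prefixes with their stored suffixes.
        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct Compressed {
            uint64_t code_word;               // packed prefix values for all items
            std::vector<uint8_t> suffixes;    // one suffix per item, in sorted order
        };

        Compressed compress(std::vector<uint8_t> items) {
            std::sort(items.begin(), items.end());            // semi-sort: prefixes dominate ordering
            Compressed out{0, {}};
            for (uint8_t v : items) {
                uint8_t prefix = v >> 4;                      // prefix selected from data content
                out.code_word = (out.code_word << 4) | prefix; // identity "encoding table"
                out.suffixes.push_back(v & 0x0F);             // suffix data stored separately
            }
            return out;
        }

        std::vector<uint8_t> decompress(const Compressed& c) {
            std::vector<uint8_t> items(c.suffixes.size());
            uint64_t cw = c.code_word;
            for (size_t i = items.size(); i-- > 0;) {
                uint8_t prefix = cw & 0x0F;                   // recover prefixes from code word
                cw >>= 4;
                items[i] = static_cast<uint8_t>((prefix << 4) | c.suffixes[i]); // pair with suffix
            }
            return items;
        }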

    ADDRESS MAPPING-AWARE TASKING MECHANISM

    Publication Number: US20220206839A1

    Publication Date: 2022-06-30

    Application Number: US17135381

    Filing Date: 2020-12-28

    Abstract: An Address Mapping-Aware Tasking (AMAT) mechanism manages compute task data and issues compute tasks on behalf of threads that created the compute task data. The AMAT mechanism stores compute task data generated by host threads in a set of partitions, where each partition is designated for a particular memory module. The AMAT mechanism maintains address mapping data that maps address information to partitions. Threads push compute task data to the AMAT mechanism instead of generating and issuing their own compute tasks. The AMAT mechanism uses address information included in the compute task data and the address mapping data to determine partitions in which to store the compute task data. The AMAT mechanism then issues compute tasks to be executed near the corresponding memory modules (i.e., in PIM execution units or NUMA compute nodes) based upon the compute task data stored in the partitions.
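
    An illustrative C++ sketch of the partitioning idea, with an assumed address-to-partition mapping (address bits above the page offset) and assumed names; the real AMAT mechanism's mapping data and task issue path are as claimed in the patent, not as written here.

        // Hypothetical sketch: host threads push task data, an address-to-partition
        // map selects the per-memory-module partition, and tasks are issued per
        // partition (e.g., near a PIM unit or NUMA node owning that module).
        #include <cstdint>
        #include <functional>
        #include <utility>
        #include <vector>

        struct ComputeTask { uint64_t target_addr; std::function<void()> work; };

        class AddressMappingAwareTasking {
        public:
            explicit AddressMappingAwareTasking(size_t num_memory_modules)
                : partitions_(num_memory_modules) {}

            // Threads push compute task data instead of issuing their own tasks.
            void push(ComputeTask task) {
                size_t p = partition_for(task.target_addr);   // address mapping data
                partitions_[p].push_back(std::move(task));
            }

            // Issue all tasks stored in one partition near its memory module.
            void issue_partition(size_t p) {
                for (auto& t : partitions_[p]) t.work();
                partitions_[p].clear();
            }

        private:
            // Assumed interleaving: module selected by bits above the page offset.
            size_t partition_for(uint64_t addr) const {
                return (addr >> 12) % partitions_.size();
            }
            std::vector<std::vector<ComputeTask>> partitions_;
        };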
