PARTITIONABLE TERNARY CONTENT ADDRESSABLE MEMORY (TCAM) FOR USE WITH A BLOOM FILTER
    Invention Application | Status: Pending (Published)

    Publication No.: US20170046395A1

    Publication Date: 2017-02-16

    Application No.: US15305960

    Filing Date: 2014-04-30

    Abstract: A bit vector for a Bloom filter is determined by performing one or more hash function operations on a set of ternary content addressable memory (TCAM) words. A TCAM array is partitioned into a first portion to store the bit vector for the Bloom filter and a second portion to store the set of TCAM words. The TCAM array can be searched using a search word by performing the one or more hash function operations on the search word to generate a hashed search word and determining whether bits at specified positions of the hashed search word match bits at corresponding positions of the bit vector stored in the first portion of the TCAM array before searching the second portion of the TCAM array with the search word.

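    The pre-check the abstract describes can be sketched in software. The Python sketch below builds the bit vector from the stored words and consults it before falling back to the word search; the hash functions, the 256-bit vector width, and the linear scan standing in for the hardware TCAM lookup are illustrative assumptions, not the patented circuit.

```python
import hashlib

BITS = 256          # assumed width of the bit vector held in the first TCAM partition
NUM_HASHES = 3      # assumed number of hash function operations

def _bit_positions(word: bytes):
    """Map a word to NUM_HASHES positions in the bit vector."""
    for i in range(NUM_HASHES):
        digest = hashlib.sha256(bytes([i]) + word).digest()
        yield int.from_bytes(digest[:4], "big") % BITS

def build_bloom(tcam_words):
    """Build the bit vector (first partition) from the stored TCAM words."""
    vector = 0
    for word in tcam_words:
        for pos in _bit_positions(word):
            vector |= 1 << pos
    return vector

def search(tcam_words, bloom_vector, search_word: bytes):
    """Check the Bloom filter first; only search the word partition on a hit."""
    if not all((bloom_vector >> pos) & 1 for pos in _bit_positions(search_word)):
        return False                      # definite miss: skip the TCAM search
    return search_word in tcam_words      # software stand-in for the TCAM match

words = [b"10.0.0.1", b"10.0.0.2"]
bloom = build_bloom(words)
print(search(words, bloom, b"10.0.0.1"))      # True
print(search(words, bloom, b"192.168.1.1"))   # False; usually rejected by the filter alone
```

    The point of the pre-check is that a Bloom-filter miss is definitive, so most non-matching searches never touch the word partition; a hit may still be a false positive, which the subsequent word search resolves.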

    Adaptive multi-level checkpointing
    Granted Patent

    Publication No.: US10769017B2

    Publication Date: 2020-09-08

    Application No.: US15960302

    Filing Date: 2018-04-23

    Abstract: In some examples of adaptive multi-level checkpointing, a transfer parameter associated with transferring checkpoint data from node-local storage to a parallel file system may be ascertained for the checkpoint data stored in the node-local storage. The transfer parameter may be compared to a specified transfer parameter threshold, and a determination may be made, based on that comparison, as to whether to transfer the checkpoint data from the node-local storage to the parallel file system.
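    The decision the abstract describes reduces to a threshold comparison. A minimal Python sketch follows; the choice of transfer parameter (here an estimated risk of losing the node-local copy) and the threshold value are illustrative assumptions.

```python
def should_transfer(transfer_parameter: float, threshold: float) -> bool:
    """Decide whether to flush checkpoint data from node-local storage
    to the parallel file system."""
    return transfer_parameter >= threshold

# Example: transfer once the estimated risk of losing the node-local copy
# exceeds the configured threshold (both numbers are made up for illustration).
if should_transfer(transfer_parameter=0.07, threshold=0.05):
    print("copy checkpoint from node-local storage to the parallel file system")
else:
    print("keep checkpoint in node-local storage only")
```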

    Floating point data set compression

    Publication No.: US10756756B2

    Publication Date: 2020-08-25

    Application No.: US16131722

    Filing Date: 2018-09-14

    Abstract: Computer-implemented methods, systems, and devices to perform lossless compression of floating-point-format time-series data are disclosed. A first data value may be obtained in floating point format representing an initial time-series parameter, for example an output checkpoint of a computer simulation of a real-world event such as a weather prediction or a nuclear reaction simulation. A first predicted value may be determined representing the parameter at a first checkpoint time. A second data value may be obtained from the simulation and a prediction error may be calculated. Another predicted value may be generated for the next point in time and may be adjusted by the previously determined prediction error (e.g., to increase the accuracy of the subsequent prediction). When a third data value is obtained, the adjusted prediction value may be used to generate a difference (e.g., an XOR) for storing in a compressed data store to represent the third data value.
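    A minimal Python sketch of predictor-based XOR residual coding along these lines is shown below. The last-value predictor, the error-feedback rule, and the example data are illustrative assumptions; the abstract does not fix a particular predictor.

```python
import struct

def to_bits(x: float) -> int:
    """Reinterpret an IEEE 754 double as a 64-bit integer."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def compress(values):
    """Yield XOR residuals between each value and its error-adjusted prediction."""
    prediction, prev_error = values[0], 0.0
    yield to_bits(values[0])                      # first value stored verbatim
    for value in values[1:]:
        adjusted = prediction + prev_error        # adjust prediction by the last error
        yield to_bits(value) ^ to_bits(adjusted)  # small residual when prediction is close
        prev_error = value - adjusted             # error feedback for the next prediction
        prediction = value                        # last-value predictor for the next step

residuals = list(compress([1.0, 1.0001, 1.0002, 1.0003]))
# Sign and exponent bits cancel in the XOR, so residuals carry long runs of
# leading zeros that a back-end entropy coder can exploit.
print([hex(r) for r in residuals])
```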

    Memory side accelerator thread assignments

    Publication No.: US10324644B2

    Publication Date: 2019-06-18

    Application No.: US15476185

    Filing Date: 2017-03-31

    Abstract: Examples described herein include receiving an operation pipeline for a computing system and building a graph that comprises a model for each of a number of potential memory side accelerator thread assignments to carry out the operation pipeline. The computing system may comprise at least two memories and a number of memory side accelerators. Each model may comprise a number of steps, and at least one of those steps may comprise a function performed at one of the memory side accelerators. Examples described herein also include determining a cost of at least one model.
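    One way to read this is as a search over candidate step-to-accelerator assignments, each scored by a cost model. The Python sketch below enumerates assignments for a toy two-step pipeline; the per-step costs and the flat data-movement penalty are illustrative assumptions, not the cost model of the patent.

```python
from itertools import product

def plan(pipeline, accelerators, step_cost, move_cost):
    """Return the cheapest assignment of pipeline steps to accelerators."""
    best_assignment, best_cost = None, float("inf")
    for assignment in product(accelerators, repeat=len(pipeline)):
        # Per-step execution cost for this candidate assignment.
        cost = sum(step_cost[(step, acc)] for step, acc in zip(pipeline, assignment))
        # Penalty whenever consecutive steps run on different accelerators.
        cost += sum(move_cost for a, b in zip(assignment, assignment[1:]) if a != b)
        if cost < best_cost:
            best_assignment, best_cost = assignment, cost
    return best_assignment, best_cost

pipeline = ["filter", "aggregate"]
accelerators = ["acc0", "acc1"]
step_cost = {("filter", "acc0"): 4, ("filter", "acc1"): 2,
             ("aggregate", "acc0"): 1, ("aggregate", "acc1"): 5}
print(plan(pipeline, accelerators, step_cost, move_cost=3))  # (('acc0', 'acc0'), 5)
```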

    Modification of multiple lines of cache chunk before invalidation of lines

    Publication No.: US10241911B2

    Publication Date: 2019-03-26

    Application No.: US15246136

    Filing Date: 2016-08-24

    Abstract: Examples described herein relate to caching in a system with multiple nodes sharing a globally addressable memory. The globally addressable memory includes multiple windows that each include multiple chunks. Each node of a set of the nodes includes a cache that is associated with one of the windows. One of the nodes has write access to one of the chunks of the window, and the other nodes have read access to that chunk. The node with write access further holds a copy of the chunk in its cache and modifies multiple lines of the chunk copy. After the first line of the chunk copy is modified, a notification is sent to the other nodes that the chunk should be marked dirty. After the multiple lines are modified, an invalidation message for each of the modified lines is sent to the set of the nodes.
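    The write protocol can be sketched as follows in Python: the first modified line triggers a single dirty notification to the reader nodes, and a later flush sends one invalidation per modified line. Message delivery is modeled as direct method calls, an illustrative stand-in for the fabric of the globally addressable memory.

```python
class ReaderNode:
    """A node with read access to the chunk."""
    def __init__(self, name):
        self.name = name
        self.dirty_chunks = set()
        self.invalid_lines = set()

    def mark_dirty(self, chunk_id):
        self.dirty_chunks.add(chunk_id)

    def invalidate(self, chunk_id, line):
        self.invalid_lines.add((chunk_id, line))

class WriterNode:
    """The node with write access; holds a cached copy of the chunk."""
    def __init__(self, chunk_id, lines, readers):
        self.chunk_id, self.readers = chunk_id, readers
        self.chunk_copy = list(lines)
        self.modified_lines = []

    def write_line(self, index, data):
        if not self.modified_lines:          # first write: tell readers the chunk is dirty
            for reader in self.readers:
                reader.mark_dirty(self.chunk_id)
        self.chunk_copy[index] = data
        self.modified_lines.append(index)

    def flush(self):
        """After the writes, invalidate each modified line at the reader nodes."""
        for index in self.modified_lines:
            for reader in self.readers:
                reader.invalidate(self.chunk_id, index)
        self.modified_lines.clear()

readers = [ReaderNode("n1"), ReaderNode("n2")]
writer = WriterNode(chunk_id=7, lines=[b""] * 4, readers=readers)
writer.write_line(0, b"a")
writer.write_line(2, b"b")
writer.flush()
print(readers[0].dirty_chunks, readers[0].invalid_lines)
```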
