ARCHITECTURE AND ALGORITHMS FOR DATA COMPRESSION

    Publication number: US20170351429A1

    Publication date: 2017-12-07

    Application number: US15176082

    Application date: 2016-06-07

    CPC classification number: G06T1/20 H03M7/40 H03M7/60

    Abstract: A system architecture conserves memory bandwidth by including a compression utility to process data transfers from the cache into external memory. The cache decompresses transfers from external memory and delivers full-format data to naive clients that lack decompression capability, while directly transferring compressed data to savvy clients that include decompression capability. An improved compression algorithm includes software that computes the difference between the current data word and each of a number of prior data words. Software selects the prior data word with the smallest difference as the nearest match and encodes the bit width of the difference to this data word. Software then encodes the difference between the current stride and the closest previous stride. Software combines the stride, bit width, and difference to yield the final encoded data word. Software may encode the stride of one data word as a value relative to the stride of a previous data word.
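The nearest-match delta encoding described in the abstract can be sketched in Python. All names and the history-window representation are illustrative assumptions (the patent publishes no source code), and the final relative-stride encoding step is omitted for brevity:

```python
def bit_width(value: int) -> int:
    """Bits needed to represent a signed difference (magnitude plus sign bit)."""
    return max(1, abs(value).bit_length() + 1)

def encode_word(current: int, history: list[int]) -> tuple[int, int, int]:
    """Encode one data word against a window of prior words.

    history[-1] is the most recent prior word. Returns (stride, width, diff):
    how far back the nearest match lies, the bit width of the difference,
    and the difference itself.
    """
    # Select the prior word with the smallest absolute difference.
    best = min(range(len(history)), key=lambda i: abs(current - history[i]))
    stride = len(history) - best      # distance back to the nearest match
    diff = current - history[best]
    return stride, bit_width(diff), diff
```

For example, encoding the word 100 against the window [90, 250, 101] matches the most recent word (stride 1) and needs only a 2-bit signed difference, illustrating how near-duplicate words compress to a few bits.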

    COMPRESSION STATUS BIT CACHE AND BACKING STORE
    2. Invention application (pending, published)

    Publication number: US20140237189A1

    Publication date: 2014-08-21

    Application number: US14157159

    Application date: 2014-01-16

    Abstract: One embodiment of the present invention sets forth a technique for increasing available storage space within compressed blocks of memory attached to data processing chips, without requiring a proportional increase in on-chip compression status bits. A compression status bit cache provides on-chip availability of compression status bits used to determine how many bits are needed to access a potentially compressed block of memory. A backing store residing in a reserved region of attached memory provides storage for a complete set of compression status bits used to represent compression status of an arbitrarily large number of blocks residing in attached memory. Physical address remapping (“swizzling”) used to distribute memory access patterns over a plurality of physical memory devices is partially replicated by the compression status bit cache to efficiently integrate allocation and access of the backing store data with other user data.
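A minimal software model of this idea is sketched below, assuming an LRU on-chip cache and a Python dict standing in for the reserved backing-store region. Both are illustrative simplifications of the patented hardware, and the partial replication of address swizzling is not modeled:

```python
from collections import OrderedDict

class CompressionStatusBitCache:
    """Illustrative sketch: a small on-chip cache of per-block compression
    status bits, backed by a reserved region of attached memory."""

    def __init__(self, backing_store: dict, capacity: int = 4):
        self.backing = backing_store   # reserved region in attached memory
        self.cache = OrderedDict()     # on-chip cache lines, LRU order
        self.capacity = capacity

    def status(self, block_addr: int) -> int:
        """Return the status bits for a block, filling from the backing store on a miss."""
        if block_addr in self.cache:
            self.cache.move_to_end(block_addr)      # LRU touch on hit
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)      # evict least recently used
            self.cache[block_addr] = self.backing.get(block_addr, 0)
        return self.cache[block_addr]
```

The status bits returned here would then tell the memory controller how many bits (or sectors) it must actually fetch for a potentially compressed block.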


    REDUCING MEMORY TRAFFIC IN DRAM ECC MODE
    3. Invention application (in force)

    Publication number: US20150012705A1

    Publication date: 2015-01-08

    Application number: US13935414

    Application date: 2013-07-03

    Abstract: A method for managing memory traffic includes causing first data to be written to a data cache memory, where a first write request comprises a partial write and writes the first data to a first portion of the data cache memory, and further includes tracking the number of partial writes in the data cache memory. The method further includes issuing a fill request for one or more partial writes in the data cache memory if the number of partial writes in the data cache memory is greater than a predetermined first threshold.
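The claimed policy, tracking partial writes and deferring fill requests until their count crosses a threshold, can be sketched as follows. The class and method names are illustrative assumptions, not the patented implementation:

```python
class WriteCoalescingCache:
    """Illustrative sketch: accumulate partially written cache lines and
    issue fill requests only once their count exceeds a threshold."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.partial_lines: set[int] = set()
        self.fill_requests: list[int] = []

    def partial_write(self, line: int) -> None:
        # Record that only part of this line holds valid write data.
        self.partial_lines.add(line)
        # Deferring fills lets later writes complete a line, avoiding
        # a read-modify-write; fills are issued only past the threshold.
        if len(self.partial_lines) > self.threshold:
            for pending in sorted(self.partial_lines):
                self.fill_requests.append(pending)  # fetch the missing bytes
            self.partial_lines.clear()
```

The benefit in ECC mode is that many partial writes never trigger a fill at all, because subsequent writes to the same line make it fully valid before the threshold is reached.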


    Organizing Memory to Optimize Memory Accesses of Compressed Data

    Publication number: US20170123977A1

    Publication date: 2017-05-04

    Application number: US14925920

    Application date: 2015-10-28

    Abstract: In one embodiment of the present invention a cache unit organizes data stored in an attached memory to optimize accesses to compressed data. In operation, the cache unit introduces a layer of indirection between a physical address associated with a memory access request and groups of blocks in the attached memory. This layer of indirection—virtual tiles—enables the cache unit to selectively store, in a single physical tile, compressed data that would conventionally be stored in separate physical tiles included in a group of blocks. Because the cache unit stores compressed data associated with multiple physical tiles in a single physical tile and, more specifically, in adjacent locations within that single physical tile, the cache unit coalesces the compressed data into contiguous blocks. Subsequently, upon performing a read operation, the cache unit may retrieve the compressed data conventionally associated with separate physical tiles in a single read operation.
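The virtual-tile indirection can be sketched as a simple mapping table. This is an illustrative model only (names and the slot-based packing are assumptions): the point is that consecutive placements land in adjacent slots of the same physical tile, so one read covers several logical tiles:

```python
class VirtualTileMap:
    """Illustrative sketch: map virtual tiles to (physical tile, slot) so
    compressed tiles pack into adjacent locations of one physical tile."""

    def __init__(self, slots_per_physical_tile: int):
        self.slots = slots_per_physical_tile
        self.mapping: dict[int, tuple[int, int]] = {}  # virtual -> (phys, slot)
        self.next_slot = 0  # slots are handed out contiguously

    def place(self, virtual_tile: int) -> tuple[int, int]:
        """Assign the next adjacent slot, coalescing compressed data."""
        phys, slot = divmod(self.next_slot, self.slots)
        self.mapping[virtual_tile] = (phys, slot)
        self.next_slot += 1
        return phys, slot
```

With four slots per physical tile, the first four virtual tiles all map into physical tile 0, so a single read of that tile retrieves compressed data that would conventionally require four separate accesses.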

    CONTROL MECHANISM FOR FINE-TUNED CACHE TO BACKING-STORE SYNCHRONIZATION
    6. Invention application (in force)

    Publication number: US20140122809A1

    Publication date: 2014-05-01

    Application number: US13664387

    Application date: 2012-10-30

    Abstract: One embodiment of the present invention sets forth a technique for processing commands received by an intermediary cache from one or more clients. The technique involves receiving a first write command from an arbiter unit, where the first write command specifies a first memory address, determining that a first cache line related to a set of cache lines included in the intermediary cache is associated with the first memory address, causing data associated with the first write command to be written into the first cache line, and marking the first cache line as dirty. The technique further involves determining whether the total number of cache lines marked as dirty in the set of cache lines is less than, equal to, or greater than a first threshold value, and: not transmitting a dirty data notification to the frame buffer logic when the total number is less than the first threshold value, or transmitting a dirty data notification to the frame buffer logic when the total number is equal to or greater than the first threshold value.
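The threshold-gated notification described above can be sketched compactly. This is an illustrative model with assumed names; writeback and line-cleaning after notification are omitted:

```python
class IntermediaryCache:
    """Illustrative sketch: writes mark lines dirty, and the frame buffer
    logic is notified only when the dirty count in a set reaches a threshold."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.dirty: set[int] = set()
        self.notifications = 0

    def write(self, line: int) -> bool:
        """Write to a line; return True if a dirty-data notification is sent."""
        self.dirty.add(line)
        if len(self.dirty) >= self.threshold:
            self.notifications += 1   # notify frame buffer logic
            return True
        return False                  # below threshold: stay silent
```

Gating notifications on a threshold rather than per write lets the frame buffer logic batch cleaning work instead of being interrupted for every dirty line.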

