Cache and associated method with frame buffer managed dirty data pull and high-priority clean mechanism
    1.
    Granted Patent · In force

    Publication No.: US08464001B1

    Publication Date: 2013-06-11

    Application No.: US12331305

    Filing Date: 2008-12-09

    IPC Classes: G06F12/12 G06F13/00

    Abstract: Systems and methods are disclosed for managing the number of affirmatively associated cache lines related to the different sets of a data cache unit. A tag look-up unit implements two thresholds, which may be configurable, to manage the number of cache lines related to a given set that store dirty data or are reserved for in-flight read requests. If the number of affirmatively associated cache lines in a given set is equal to a maximum threshold, the tag look-up unit stalls future requests that require an available cache line within that set to be affirmatively associated. To reduce the number of stalled requests, the tag look-up unit transmits a high-priority clean notification to the frame buffer logic when the number of affirmatively associated cache lines in a given set approaches the maximum threshold. The frame buffer logic then processes requests associated with that set preemptively.
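    The two-threshold policy above can be sketched as a small per-set counter. This is a minimal illustration, not the patent's implementation; the names (`SetTracker`, `CLEAN_THRESHOLD`, `MAX_THRESHOLD`) and the threshold values are assumptions.

```python
CLEAN_THRESHOLD = 6   # approaching full: ask frame buffer logic to clean early
MAX_THRESHOLD = 8     # associativity limit: stall new requests for this set

class SetTracker:
    """Tracks, per cache set, how many lines are 'affirmatively associated'
    (holding dirty data or reserved for an in-flight read)."""
    def __init__(self, num_sets):
        self.counts = [0] * num_sets
        self.clean_notices = []   # stand-in for notifications to frame buffer logic

    def try_reserve(self, set_index):
        if self.counts[set_index] >= MAX_THRESHOLD:
            return False          # stall: no line in this set may be associated
        self.counts[set_index] += 1
        if self.counts[set_index] >= CLEAN_THRESHOLD:
            # high-priority clean: frame buffer logic should process this set preemptively
            self.clean_notices.append(set_index)
        return True

    def release(self, set_index):
        # called when dirty data is written back or an in-flight read completes
        self.counts[set_index] -= 1

tracker = SetTracker(num_sets=4)
for _ in range(8):
    tracker.try_reserve(0)        # fill set 0 to the maximum threshold
stalled = not tracker.try_reserve(0)
```

    Once lines are released (cleaned or read-completed), the count drops below the maximum and stalled requests can proceed.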


    Class Dependent Clean and Dirty Policy
    2.
    Patent Application · In force

    Publication No.: US20130124802A1

    Publication Date: 2013-05-16

    Application No.: US13296119

    Filing Date: 2011-11-14

    IPC Classes: G06F12/08

    CPC Classes: G06F12/0804

    Abstract: A method for cleaning dirty data in an intermediate cache is disclosed. A dirty data notification, including a memory address and a data class, is transmitted by a level 2 (L2) cache to frame buffer logic when dirty data is stored in the L2 cache. The data classes may include evict first, evict normal and evict last. In one embodiment, data belonging to the evict first data class is raster operations data with little reuse potential. The frame buffer logic uses a notification sorter to organize dirty data notifications, where an entry in the notification sorter stores the DRAM bank page number, a first count of cache lines that have resident dirty data and a second count of cache lines that have resident evict_first dirty data associated with that DRAM bank. The frame buffer logic transmits dirty data associated with an entry when the first count reaches a threshold.
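    The notification-sorter bookkeeping can be illustrated as a map from DRAM bank page to a pair of counters. The names and the flush threshold below are illustrative assumptions, not patent text.

```python
FLUSH_THRESHOLD = 4   # assumed: flush a page's dirty lines at this many notifications

class NotificationSorter:
    """Groups dirty-data notifications by DRAM bank page; each entry keeps a
    total dirty-line count and a count of evict_first dirty lines."""
    def __init__(self):
        self.entries = {}   # bank page -> [dirty_count, evict_first_count]
        self.flushed = []   # pages whose dirty data has been transmitted to DRAM

    def notify(self, bank_page, data_class):
        entry = self.entries.setdefault(bank_page, [0, 0])
        entry[0] += 1                       # first count: all dirty lines
        if data_class == "evict_first":
            entry[1] += 1                   # second count: evict_first dirty lines
        if entry[0] >= FLUSH_THRESHOLD:
            # first count reached the threshold: transmit this page's dirty data
            self.flushed.append(bank_page)
            del self.entries[bank_page]

sorter = NotificationSorter()
for _ in range(3):
    sorter.notify(0x12, "evict_first")
sorter.notify(0x12, "evict_normal")   # fourth notification triggers the flush
```

    Grouping by bank page lets the frame buffer logic batch writebacks that hit the same open DRAM page.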


    Techniques for evicting dirty data from a cache using a notification sorter and count thresholds
    3.
    Granted Patent · In force

    Publication No.: US08949541B2

    Publication Date: 2015-02-03

    Application No.: US13296119

    Filing Date: 2011-11-14

    IPC Classes: G06F12/08 G06F12/12

    CPC Classes: G06F12/0804

    Abstract: A method for cleaning dirty data in an intermediate cache is disclosed. A dirty data notification, including a memory address and a data class, is transmitted by a level 2 (L2) cache to frame buffer logic when dirty data is stored in the L2 cache. The data classes may include evict first, evict normal and evict last. In one embodiment, data belonging to the evict first data class is raster operations data with little reuse potential. The frame buffer logic uses a notification sorter to organize dirty data notifications, where an entry in the notification sorter stores the DRAM bank page number, a first count of cache lines that have resident dirty data and a second count of cache lines that have resident evict_first dirty data associated with that DRAM bank. The frame buffer logic transmits dirty data associated with an entry when the first count reaches a threshold.


    Using a data cache array as a DRAM load/store buffer
    4.
    Granted Patent · In force

    Publication No.: US08234478B1

    Publication Date: 2012-07-31

    Application No.: US12256400

    Filing Date: 2008-10-22

    IPC Classes: G06F12/00 G06F13/00 G06F13/28

    CPC Classes: G06F12/0895

    Abstract: One embodiment of the invention sets forth a mechanism for using the L2 cache as a buffer for data associated with read/write commands that are processed by the frame buffer logic. A tag look-up unit tracks the availability of each cache line in the L2 cache, reserves necessary cache lines for the read/write operations and transmits read commands to the frame buffer logic for processing. A data slice scheduler transmits a dirty data notification to the frame buffer logic when data associated with a write command is stored in an SRAM bank. The data slice scheduler schedules accesses to the SRAM banks and gives priority to accesses requested by the frame buffer logic to store or retrieve data associated with read/write commands. This feature allows cache lines reserved for read/write commands that are processed by the frame buffer logic to be made available at the earliest clock cycle.
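    The priority rule in the abstract, frame buffer logic accesses served before ordinary client accesses, can be sketched with a priority queue. The class name, priority levels, and request strings are illustrative assumptions.

```python
import heapq

FB_PRIORITY = 0      # frame buffer logic accesses: served first
CLIENT_PRIORITY = 1  # ordinary client accesses

class DataSliceScheduler:
    """Schedules SRAM-bank accesses, letting frame buffer logic requests
    jump ahead so reserved cache lines free up at the earliest cycle."""
    def __init__(self):
        self._queue = []
        self._seq = 0    # tie-breaker keeps FIFO order within a priority level

    def request(self, priority, description):
        heapq.heappush(self._queue, (priority, self._seq, description))
        self._seq += 1

    def next_access(self):
        return heapq.heappop(self._queue)[2]

sched = DataSliceScheduler()
sched.request(CLIENT_PRIORITY, "client read A")
sched.request(FB_PRIORITY, "frame buffer store for write command")
sched.request(CLIENT_PRIORITY, "client read B")
first = sched.next_access()   # the frame buffer access is served first
```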


    Configurable cache occupancy policy
    5.
    Granted Patent · In force

    Publication No.: US08131931B1

    Publication Date: 2012-03-06

    Application No.: US12256378

    Filing Date: 2008-10-22

    IPC Classes: G06F12/00

    CPC Classes: G06F12/121

    Abstract: One embodiment of the invention is a method for evicting data from an intermediary cache that includes the steps of receiving a command from a client, determining that there is a cache miss relative to the intermediary cache, identifying one or more cache lines within the intermediary cache to store data associated with the command, determining whether any of the data residing in the one or more cache lines includes raster operations data or normal data, and causing the data residing in the one or more cache lines to be evicted or stalling the command based, at least in part, on whether the data includes raster operations data or normal data. Advantageously, the method allows a series of cache eviction policies based on how cached data is categorized and the eviction classes of the data. Consequently, more optimized eviction decisions may be made, leading to fewer command stalls and improved performance.
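    A miss handler shaped like the steps above might look as follows. The preference order (raster operations data first, then normal data, otherwise stall) is an assumed policy consistent with the abstract's "little reuse potential" remark, not the patent's exact rule; all names are illustrative.

```python
def pick_victim(candidate_lines):
    """On a cache miss, choose which candidate line to evict.

    candidate_lines: list of data-kind tags for the lines that could hold
    the incoming data. Prefer evicting raster operations data (assumed to
    have little reuse potential), then normal data; otherwise stall."""
    for preferred in ("raster_ops", "normal"):
        for i, kind in enumerate(candidate_lines):
            if kind == preferred:
                return ("evict", i)
    return ("stall", None)   # no evictable line: the command must wait

action, index = pick_victim(["normal", "raster_ops", "normal"])
```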


    System, method and frame buffer logic for evicting dirty data from a cache using counters and data types
    6.
    Granted Patent · In force

    Publication No.: US08060700B1

    Publication Date: 2011-11-15

    Application No.: US12330469

    Filing Date: 2008-12-08

    IPC Classes: G06F13/00 G06F12/12

    Abstract: A system and method for cleaning dirty data in an intermediate cache are disclosed. A dirty data notification, including a memory address and a data class, is transmitted by a level 2 (L2) cache to frame buffer logic when dirty data is stored in the L2 cache. The data classes include evict first, evict normal and evict last. In one embodiment, data belonging to the evict first data class is raster operations data with little reuse potential. The frame buffer logic uses a notification sorter to organize dirty data notifications, where an entry in the notification sorter stores the DRAM bank page number, a first count of cache lines that have resident dirty data and a second count of cache lines that have resident evict_first dirty data associated with that DRAM bank. The frame buffer logic transmits dirty data associated with an entry when the first count reaches a threshold.


    L2 ECC implementation
    7.
    Granted Patent · In force

    Publication No.: US08156404B1

    Publication Date: 2012-04-10

    Application No.: US12202161

    Filing Date: 2008-08-29

    IPC Classes: G06F11/00

    CPC Classes: G06F11/1048

    Abstract: One embodiment of the present invention sets forth a method for implementing ECC protection in an on-chip L2 cache. When data is written to or read from an external memory, logic within the L2 cache is configured to generate ECC check bits and store the ECC check bits in the L2 cache in space typically allocated for storing byte enables. As a result, data stored in the L2 cache may be protected against bit errors without incurring the costs of providing additional storage or complex hardware for the ECC check bits.
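    For flavor, here is a minimal single-error-correcting check-bit sketch using a textbook Hamming code. The patent does not specify its coding scheme or bit layout, so the code, the 8-bit data width, and all names are assumptions; the point is only that a few check bits per word (small enough to fit in space such as byte-enable storage) suffice to repair a single bit flip.

```python
def hamming_positions(width):
    """Codeword positions (1-based) that carry data bits: every position
    that is not a power of two (powers of two are check-bit slots)."""
    pos, out = 3, []
    while len(out) < width:
        if pos & (pos - 1):          # nonzero unless pos is a power of two
            out.append(pos)
        pos += 1
    return out

def check_bits(data, width=8):
    """XOR together the codeword positions of all set data bits; the
    result is the vector of Hamming check bits for the word."""
    syn = 0
    for i, p in enumerate(hamming_positions(width)):
        if (data >> i) & 1:
            syn ^= p
    return syn

def correct(data, stored_check, width=8):
    """Recompute check bits; the syndrome locates a single flipped bit."""
    positions = hamming_positions(width)
    syndrome = check_bits(data, width) ^ stored_check
    if syndrome in positions:
        return data ^ (1 << positions.index(syndrome))  # flip the bad data bit
    return data   # no error, or the flip hit a check bit rather than the data

word = 0b10110100
ecc = check_bits(word)        # would live in the space reserved for byte enables
corrupted = word ^ (1 << 4)   # single-bit upset
repaired = correct(corrupted, ecc)
```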


    Storing dynamically sized buffers within a cache
    9.
    Granted Patent · In force

    Publication No.: US08504773B1

    Publication Date: 2013-08-06

    Application No.: US12326764

    Filing Date: 2008-12-02

    CPC Classes: G06F15/167

    Abstract: A system and method for buffering intermediate data in a processing pipeline architecture stores the intermediate data in a shared cache that is coupled between one or more pipeline processing units and an external memory. The shared cache provides storage that is used by multiple pipeline processing units. The storage capacity of the shared cache is dynamically allocated to the different pipeline processing units as needed, to avoid stalling the upstream units, thereby improving overall system throughput.
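    Dynamic allocation of shared-cache capacity among pipeline units can be sketched as a grant-and-release pool. The class name, unit names, and the grant-what-is-available policy are illustrative assumptions, not the patent's mechanism.

```python
class SharedCacheAllocator:
    """Hands out shared-cache lines to pipeline units on demand and
    reclaims them when the downstream consumer drains the buffer."""
    def __init__(self, total_lines):
        self.free = total_lines
        self.held = {}               # unit name -> lines currently allocated

    def request(self, unit, lines):
        """Grant as many lines as are available so the upstream unit
        need not stall waiting for its full ask."""
        granted = min(lines, self.free)
        self.free -= granted
        self.held[unit] = self.held.get(unit, 0) + granted
        return granted

    def release(self, unit, lines):
        """Return drained lines to the pool for other units to use."""
        lines = min(lines, self.held.get(unit, 0))
        self.held[unit] -= lines
        self.free += lines

alloc = SharedCacheAllocator(total_lines=64)
a = alloc.request("geometry", 48)   # granted in full
b = alloc.request("raster", 32)     # partially granted: only 16 lines remain
alloc.release("geometry", 16)       # drained capacity becomes available again
c = alloc.request("raster", 16)
```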


    Method and system for converting data formats using a shared cache coupled between clients and an external memory
    10.
    Granted Patent · In force

    Publication No.: US08271734B1

    Publication Date: 2012-09-18

    Application No.: US12329345

    Filing Date: 2008-12-05

    IPC Classes: G06F13/00

    CPC Classes: G06F12/084 G06F2212/401

    Abstract: A system and method for converting data from one format to another in a processing pipeline architecture. Data is stored in a shared cache that is coupled between one or more clients and an external memory. The shared cache provides storage that is used by multiple clients rather than being dedicated to separately convert the data format for each client. Each client may interface with the memory using a different format, such as a compressed data format. Data is converted to the format expected by the particular client as it is read from the cache and output to the client during a read operation. Bytes of a cache line may be remapped to bytes of an unpack register for output to a naïve client, which may be configured to perform texture mapping operations. Data is converted from the client format to the memory format as it is stored into the cache during a write operation.
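    The read-path byte remapping into an unpack register, and its write-path inverse, can be sketched as a gather/scatter over a byte map. The mapping table below (a per-word byte swap) is an arbitrary example, not the patent's actual layout; function names are illustrative.

```python
def unpack_line(cache_line, byte_map):
    """Read path: build the client-visible register by gathering bytes of
    the cache line in the order the client expects."""
    return bytes(cache_line[src] for src in byte_map)

def pack_line(register, byte_map, line_size):
    """Write path: scatter client-format bytes back into their
    memory-format positions in the cache line."""
    line = bytearray(line_size)
    for dst, src in enumerate(byte_map):
        line[src] = register[dst]
    return bytes(line)

line = bytes(range(8))                # memory-format cache-line bytes 0..7
swizzle = [3, 2, 1, 0, 7, 6, 5, 4]    # example map: byte swap within each word
reg = unpack_line(line, swizzle)      # what the client sees on a read
restored = pack_line(reg, swizzle, len(line))   # round-trips on a write
```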
