2. Prescient cache management
    Granted patent, in force

    Publication No.: US07616209B1

    Publication date: 2009-11-10

    Application No.: US11454230

    Filing date: 2006-06-16

    IPC classification: G09G5/36 G06F13/00 G06F13/28

    Abstract: Prescient cache management methods and systems are disclosed. In one embodiment, within a pre-raster engine operations stage in a graphics rendering pipeline, tile entries are stored in a buffer. Each of these tile entries is related to a transaction request that enters the pre-raster engine operations stage and has a screen coordinates field and a conflict field. If this buffer includes a first tile entry, related to a first transaction request associated with a first tile, and a second tile entry, related to a second transaction request that enters the pre-raster engine operations stage after the first transaction request and is also associated with the first tile, then the conflict field of the first tile entry is updated with a conflict type that reflects the number of tile entries between the first tile entry and the second tile entry.

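    As a rough illustration of the bookkeeping the abstract describes, the C++ sketch below stores tile entries in a buffer and, when a later request touches the same tile, writes the distance between the two entries into the earlier entry's conflict field. The buffer type, field names, and distance-based conflict type are assumptions for illustration only; the patent does not publish source code.

        #include <cstdint>
        #include <deque>
        #include <iostream>

        // Hypothetical tile entry as described in the abstract: screen
        // coordinates identify the tile, and a conflict field records how
        // near a later request to the same tile is.
        struct TileEntry {
            uint16_t x, y;      // screen coordinates of the tile
            int      conflict;  // -1 = no conflict observed yet
        };

        // On each new transaction request, scan the buffer for earlier
        // entries touching the same tile and record the number of
        // intervening entries as the conflict type.
        void push_request(std::deque<TileEntry>& buf, uint16_t x, uint16_t y) {
            for (std::size_t i = 0; i < buf.size(); ++i) {
                if (buf[i].x == x && buf[i].y == y) {
                    // Conflict type reflects the number of entries between
                    // the earlier entry and the incoming one.
                    buf[i].conflict = static_cast<int>(buf.size() - 1 - i);
                }
            }
            buf.push_back({x, y, -1});
        }

        int main() {
            std::deque<TileEntry> buf;
            push_request(buf, 3, 7);  // first request for tile (3,7)
            push_request(buf, 5, 2);  // unrelated tile
            push_request(buf, 3, 7);  // re-touches (3,7): 1 entry in between
            std::cout << "conflict of first entry: " << buf[0].conflict << "\n";  // 1
        }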

4. Efficient line and page organization for compression status bit caching
    Granted patent, in force

    Publication No.: US08627041B2

    Publication date: 2014-01-07

    Application No.: US12901452

    Filing date: 2010-10-08

    IPC classification: G06F12/00 G06F13/00

    Abstract: One embodiment of the present invention sets forth a technique for performing a memory access request to compressed data within a virtually mapped memory system comprising an arbitrary number of partitions. A virtual address is mapped to a linear physical address specified by a page table entry (PTE). The PTE is configured to store compression attributes, which are used to locate the compression status for the corresponding physical memory page within a compression status bit cache. The compression status bit cache operates in conjunction with a compression status bit backing store. If the compression status is available from the compression status bit cache, the memory access request proceeds using that status. If the compression status bit cache misses, the miss triggers a fill operation from the backing store. After the fill completes, the memory access proceeds using the newly filled compression status information.

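    A minimal C++ sketch of the lookup path described above: compression attributes select a line in a small status cache, and a miss triggers a fill from the backing store before the access proceeds. The direct-mapped organization and all names are illustrative assumptions, not the patented hardware design.

        #include <array>
        #include <cstdint>
        #include <iostream>
        #include <unordered_map>

        // Backing store holding compression status bits for every physical
        // page (modeled as a map; in hardware this lives in memory).
        std::unordered_map<uint64_t, uint8_t> backing_store;

        // A tiny direct-mapped compression status bit cache.
        struct StatusLine { uint64_t page = ~0ull; uint8_t bits = 0; };
        std::array<StatusLine, 64> status_cache;

        // Look up compression status for a physical page; on a miss, fill
        // the line from the backing store before proceeding.
        uint8_t compression_status(uint64_t page) {
            StatusLine& line = status_cache[page % status_cache.size()];
            if (line.page != page) {       // cache miss
                line.page = page;          // fill from the backing store
                line.bits = backing_store[page];
            }
            return line.bits;              // hit (or newly filled) path
        }

        int main() {
            backing_store[42] = 0b10;      // page 42 stored compressed
            // First access misses and fills; second access hits.
            std::cout << int(compression_status(42)) << "\n";
            std::cout << int(compression_status(42)) << "\n";
        }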

6. Compression status bit cache with deterministic isochronous latency
    Granted patent, in force

    Publication No.: US08595437B1

    Publication date: 2013-11-26

    Application No.: US12276147

    Filing date: 2008-11-21

    IPC classification: G06F12/08

    Abstract: One embodiment of the present invention sets forth a compression status bit cache with deterministic latency for isochronous memory clients of compressed memory. The compression status bit cache improves overall memory system performance by providing on-chip availability of the compression status bits that are used to size and interpret a memory access request to compressed memory. To avoid non-deterministic latency when an isochronous memory client accesses the compression status bit cache, two design features are employed. The first involves bypassing any intermediate cache when the compression status bit cache reads a new cache line in response to a read miss, thereby eliminating additional, potentially non-deterministic latencies outside the scope of the compression status bit cache. The second involves maintaining a minimum pool of clean cache lines by opportunistically writing back dirty cache lines and, optionally, temporarily blocking non-critical requests that would dirty already clean cache lines. With clean cache lines available to be overwritten quickly, the compression status bit cache avoids incurring additional miss write-back latencies.

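    The second design feature lends itself to a small model. The C++ sketch below keeps a minimum pool of clean lines by opportunistically "writing back" dirty ones, and temporarily refuses a non-critical request that would dirty a clean line while the pool sits at its minimum. The threshold value and all structure are assumptions for illustration.

        #include <cstddef>
        #include <iostream>
        #include <vector>

        // Each line is either clean (safe to overwrite immediately) or dirty
        // (would add write-back latency to a miss that needs the line).
        struct Line { bool dirty = false; };

        const std::size_t kMinCleanPool = 4;  // assumed minimum pool size

        std::size_t clean_count(const std::vector<Line>& lines) {
            std::size_t n = 0;
            for (const Line& l : lines) n += !l.dirty;
            return n;
        }

        // Opportunistically write back dirty lines until the clean pool is
        // back above the minimum, so future miss fills never wait on a
        // write-back (the non-deterministic latency the design avoids).
        void maintain_clean_pool(std::vector<Line>& lines) {
            for (Line& l : lines) {
                if (clean_count(lines) >= kMinCleanPool) break;
                if (l.dirty) l.dirty = false;  // model a completed write-back
            }
        }

        // A non-critical request that would dirty a clean line is blocked
        // while the pool is at its minimum; it can be retried later.
        bool try_dirty_line(std::vector<Line>& lines, std::size_t i) {
            if (!lines[i].dirty && clean_count(lines) <= kMinCleanPool)
                return false;                  // temporarily blocked
            lines[i].dirty = true;
            return true;
        }

        int main() {
            std::vector<Line> lines(8);
            for (std::size_t i = 0; i < 5; ++i) lines[i].dirty = true;
            maintain_clean_pool(lines);        // restores the clean pool
            std::cout << clean_count(lines) << " clean lines\n";  // 4
            std::cout << try_dirty_line(lines, 0) << "\n";        // 0: blocked
        }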

7. Cache-based control of atomic operations in conjunction with an external ALU block
    Granted patent, in force

    Publication No.: US08108610B1

    Publication date: 2012-01-31

    Application No.: US12255595

    Filing date: 2008-10-21

    IPC classification: G06F12/16

    Abstract: One embodiment of the invention sets forth a mechanism for efficiently processing atomic operations transmitted from multiple general processing clusters to an L2 cache. A tag look-up unit tracks the availability of each cache line in the L2 cache, reserves the necessary cache lines for the atomic operations, and transmits the atomic operations to an ALU for processing. The tag look-up unit also increments a reference counter associated with a reserved cache line each time an atomic operation associated with that cache line is received. This feature allows multiple atomic operations associated with the same cache line to be pipelined to the ALU. A ROP unit that includes the ALU may request additional data necessary to process an atomic operation from the L2 cache. Result data is stored in the L2 cache and may also be returned to the general processing clusters.

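    A compact C++ sketch of the reference-counting scheme in the abstract: the tag look-up unit reserves a line per address and bumps a counter for each incoming atomic, so several atomics on the same line can be pipelined to the external ALU before the line is released. All names are assumptions; the real tag unit is fixed-function hardware, not software.

        #include <cstdint>
        #include <iostream>
        #include <unordered_map>

        // Reservation state kept by a tag look-up unit: one reference count
        // per reserved cache line (keyed here by line address).
        std::unordered_map<uint64_t, int> reservations;

        // Accept an atomic op: reserve the line on first use, then increment
        // the count so later atomics to the same line pipeline behind it.
        void issue_atomic(uint64_t line_addr) {
            ++reservations[line_addr];  // reserve and/or add a reference
            // ... the op itself would be forwarded to the external ALU here ...
        }

        // ALU completion: drop one reference; the line is released only when
        // every pipelined atomic against it has finished.
        void complete_atomic(uint64_t line_addr) {
            if (--reservations[line_addr] == 0)
                reservations.erase(line_addr);  // line no longer reserved
        }

        int main() {
            issue_atomic(0x100);  // first atomic reserves line 0x100
            issue_atomic(0x100);  // second pipelines to the same line
            std::cout << reservations[0x100] << " in flight\n";  // 2
            complete_atomic(0x100);
            complete_atomic(0x100);
            std::cout << reservations.count(0x100) << "\n";      // 0: released
        }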

9. Pivoting joint infusion assembly
    Granted patent

    Publication No.: US06579267B2

    Publication date: 2003-06-17

    Application No.: US09896149

    Filing date: 2001-06-29

    IPC classification: A61M5/32

    Abstract: A system for the subcutaneous delivery of a fluid from a remote vessel into the body of a patient. The system includes a main assembly and a placement member with a needle. A delivery tube for carrying the fluid is attached at its near end to the remote reservoir or vessel. At its removed end, the delivery tube has a needle for engagement with the main assembly. The main assembly includes a rotating member that, when perpendicular to the main assembly, accepts the handle and needle for emplacement of the assembly onto a patient. After the handle and needle are removed, the delivery tube can be attached to the rotating member, which can then be rotated down to a position along and adjacent to the skin of the patient. This provides a flush-mounted infusion device.

10. Cache and associated method with frame buffer managed dirty data pull and high-priority clean mechanism
    Granted patent, in force

    Publication No.: US08464001B1

    Publication date: 2013-06-11

    Application No.: US12331305

    Filing date: 2008-12-09

    IPC classification: G06F12/12 G06F13/00

    Abstract: Systems and methods are disclosed for managing the number of affirmatively associated cache lines related to the different sets of a data cache unit. A tag look-up unit implements two thresholds, which may be configurable, to manage the number of cache lines in a given set that store dirty data or are reserved for in-flight read requests. If the number of affirmatively associated cache lines in a given set equals a maximum threshold, the tag look-up unit stalls future requests that require an available cache line within that set to be affirmatively associated. To reduce the number of stalled requests, the tag look-up unit transmits a high-priority clean notification to frame buffer logic when the number of affirmatively associated cache lines in a given set approaches the maximum threshold. The frame buffer logic then preemptively processes requests associated with that set.

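    A small C++ sketch of the two-threshold policy described above: allocations stall at the maximum threshold, and a high-priority clean notification goes out to the frame buffer logic as the per-set count approaches it. The threshold values, the per-set counter, and the notification hook are illustrative assumptions.

        #include <cstddef>
        #include <iostream>

        // Per-set policy: stall new allocations at the maximum threshold and
        // ask the frame buffer logic for a high-priority clean as the count
        // approaches it. Both thresholds are assumed configurable.
        struct SetPolicy {
            std::size_t associated = 0;       // lines dirty or reserved in-flight
            std::size_t clean_threshold = 6;  // send high-priority clean here
            std::size_t max_threshold = 8;    // stall requests here
        };

        void notify_frame_buffer_clean(std::size_t set) {  // hypothetical hook
            std::cout << "high-priority clean for set " << set << "\n";
        }

        // Returns false (stall) if the set is at its maximum; otherwise
        // allocates and, when nearing the maximum, requests a clean.
        bool try_allocate(SetPolicy& s, std::size_t set) {
            if (s.associated >= s.max_threshold)
                return false;                    // stall the request
            ++s.associated;
            if (s.associated >= s.clean_threshold)
                notify_frame_buffer_clean(set);  // preemptive clean request
            return true;
        }

        int main() {
            SetPolicy s;
            for (int i = 0; i < 10; ++i)
                if (!try_allocate(s, 0))
                    std::cout << "request stalled\n";
        }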