MECHANISM FOR EFFECTIVELY CACHING STREAMING AND NON-STREAMING DATA PATTERNS
    41.
    Invention Application
    Status: In force

    Publication Number: US20110099333A1

    Publication Date: 2011-04-28

    Application Number: US12908183

    Filing Date: 2010-10-20

    CPC classification number: G06F12/127 G06F12/0862 G06F12/124 G06F2212/6028

    Abstract: A method and apparatus for efficiently caching streaming and non-streaming data is described herein. Software, such as a compiler, identifies last-use streaming instructions/operations, i.e. the last instruction/operation to access a piece of streaming data for some number of instructions or some amount of time. As a result of an access to a cache line by a last-use instruction/operation, the cache line is updated to a streaming data no longer needed (SDN) state. When control logic is to determine a cache line to be replaced, a modified Least Recently Used (LRU) algorithm is biased to select SDN-state lines first, so that streaming data that is no longer needed is replaced first.
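
    The replacement bias described above can be pictured with a short sketch. This is only a minimal illustration of the idea, not the patented implementation; the structure, the field names, and the single-timestamp LRU approximation are assumptions made for the example.

        // Illustrative sketch: victim selection within one cache set that prefers
        // ways already marked "streaming data no longer needed" (SDN) before
        // falling back to plain least-recently-used order.
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        struct CacheLine {
            uint64_t tag = 0;
            uint64_t lastUse = 0;   // pseudo-LRU timestamp, larger = more recent
            bool valid = false;
            bool sdn = false;       // set when a last-use streaming access touched the line
        };

        // Pick a victim: an invalid way first, then any SDN way, then the true LRU way.
        std::size_t pickVictim(const std::vector<CacheLine>& set) {
            std::size_t lru = 0;
            for (std::size_t i = 0; i < set.size(); ++i) {
                if (!set[i].valid) return i;              // free way, no eviction needed
                if (set[i].sdn)    return i;              // no-longer-needed streaming data
                if (set[i].lastUse < set[lru].lastUse) lru = i;
            }
            return lru;                                   // fall back to least recently used
        }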


    MECHANISM FOR EFFECTIVELY HANDLING TEXTURE SAMPLING
    42.
    Invention Application
    Status: In force

    Publication Number: US20090174721A1

    Publication Date: 2009-07-09

    Application Number: US11967408

    Filing Date: 2007-12-31

    Applicant: Eric Sprangle

    Inventor: Eric Sprangle

    Abstract: A method and apparatus for efficiently handling texture sampling is described herein. A compiler or other software is capable of breaking a texture sampling operation for a pixel into a pre-fetch operation and a use operation. A processing element, in response to executing the pre-fetch operation, delegates computation of the texture sample of the pixel to a hardware texture sample unit. While the hardware texture sample unit performs the texture sample for the pixel and provides the result, i.e. a textured pixel (texel), to a destination address, the processing element is able to execute other independent code. After an amount of time, the processing element executes the use operation, such as a load operation that loads the texel from the destination address.
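
    The split into a pre-fetch operation and a later use operation can be sketched in ordinary software, with an asynchronous task standing in for the hardware texture sample unit. This is only an assumed illustration of the idea; the function names and the use of std::async are not part of the patent.

        // Illustrative sketch: issue the sample early ("pre-fetch"), run independent
        // work, then "use" the result once it is needed.
        #include <cmath>
        #include <cstdio>
        #include <future>

        struct Texel { float r, g, b, a; };

        // Stand-in for the hardware unit computing a filtered sample at (u, v).
        Texel sampleTexture(float u, float v) {
            return Texel{u, v, std::fabs(u - v), 1.0f};
        }

        int main() {
            // Pre-fetch operation: delegate the sample and keep a handle to the result.
            std::future<Texel> pending =
                std::async(std::launch::async, sampleTexture, 0.25f, 0.75f);

            // Independent code the processing element can execute in the meantime.
            double acc = 0.0;
            for (int i = 0; i < 1000; ++i) acc += std::sqrt(static_cast<double>(i));

            // Use operation: load the texel from its destination.
            Texel t = pending.get();
            std::printf("texel=(%.2f, %.2f, %.2f, %.2f) acc=%.1f\n", t.r, t.g, t.b, t.a, acc);
            return 0;
        }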


    Method and apparatus for prefetching data to a lower level cache memory
    43.
    Granted Invention Patent
    Status: In force

    Publication Number: US07383418B2

    Publication Date: 2008-06-03

    Application Number: US10933188

    Filing Date: 2004-09-01

    CPC classification number: G06F12/0897 G06F12/0862 G06F2212/6024

    Abstract: A prefetching scheme detects when a load misses the lower level cache but hits the next level cache. The scheme uses this information, a miss in the lower level cache together with a hit in the next higher level of cache memory, to initiate a sidedoor prefetch load that fetches the previous or next cache line into the lower level cache. To generate an address for the sidedoor prefetch, a history of cache accesses is maintained in a queue.
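
    The trigger condition and the history queue can be sketched as follows; the queue depth, the next-line choice, and all names are assumptions made for the example rather than details taken from the patent.

        // Illustrative sketch: on a load that misses L1 but hits L2, record the line
        // address in a small history queue and issue a "sidedoor" prefetch of the
        // adjacent line into the lower level (L1) cache.
        #include <cstdint>
        #include <cstdio>
        #include <deque>

        constexpr uint64_t kLineBytes = 64;

        struct History {
            std::deque<uint64_t> recent;        // recent L1-miss / L2-hit line addresses
            void push(uint64_t lineAddr) {
                recent.push_back(lineAddr);
                if (recent.size() > 16) recent.pop_front();
            }
        };

        void onLoad(uint64_t addr, bool l1Hit, bool l2Hit, History& hist) {
            if (l1Hit || !l2Hit) return;        // only act on an L1 miss that hits L2
            uint64_t line = addr / kLineBytes;
            hist.push(line);
            uint64_t prefetchAddr = (line + 1) * kLineBytes;   // next line (could also be previous)
            std::printf("sidedoor prefetch of 0x%llx into L1\n",
                        static_cast<unsigned long long>(prefetchAddr));
        }

        int main() {
            History hist;
            onLoad(0x1000, /*l1Hit=*/false, /*l2Hit=*/true, hist);  // triggers a prefetch
            onLoad(0x2000, /*l1Hit=*/true,  /*l2Hit=*/true, hist);  // no prefetch on an L1 hit
            return 0;
        }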


    Mechanism to increase data compression in a cache
    44.
    Invention Application
    Status: Under examination (published)

    Publication Number: US20050071566A1

    Publication Date: 2005-03-31

    Application Number: US10676478

    Filing Date: 2003-09-30

    CPC classification number: G06F12/0886 G06F2212/401

    Abstract: According to one embodiment, a computer system is disclosed. The computer system includes a central processing unit (CPU) and a cache memory coupled to the CPU. The cache memory includes a main cache having a plurality of compressible cache lines to store additional data, and a plurality of storage pools to hold a segment of the additional data for one or more of the plurality of cache lines that are to be compressed.
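
    One way to picture the main cache plus storage pools is sketched below; the line size, the pool layout, and all names are assumptions for illustration, not the claimed organization.

        // Illustrative sketch: a line stores compressed data in place when it fits,
        // and spills the remaining segment to a separate storage pool when it does not.
        #include <cstddef>
        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        constexpr std::size_t kLineBytes = 64;

        struct CompressedLine {
            std::vector<uint8_t> inLine;   // up to kLineBytes held in the main cache line
            int poolIndex = -1;            // index of the overflow segment, -1 if none
        };

        struct Cache {
            std::unordered_map<uint64_t, CompressedLine> lines;  // main cache by line address
            std::vector<std::vector<uint8_t>> pool;              // storage pool for overflow segments

            void insert(uint64_t lineAddr, const std::vector<uint8_t>& compressed) {
                CompressedLine cl;
                if (compressed.size() <= kLineBytes) {
                    cl.inLine = compressed;                      // fits entirely in the line
                } else {
                    cl.inLine.assign(compressed.begin(), compressed.begin() + kLineBytes);
                    cl.poolIndex = static_cast<int>(pool.size());
                    pool.emplace_back(compressed.begin() + kLineBytes, compressed.end());
                }
                lines[lineAddr] = cl;
            }
        };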

