Cache arbitration between multiple clients
    2.
    Granted patent
    Cache arbitration between multiple clients (In force)

    Publication number: US08335892B1

    Publication date: 2012-12-18

    Application number: US12650226

    Filing date: 2009-12-30

    IPC classes: G06F12/00 G06F13/00 G06F13/28

    CPC classes: G06F12/084 G06F12/0857

    Abstract: One embodiment of the present invention sets forth a technique for arbitrating requests received by an L1 cache from multiple clients. The L1 cache outputs bubble requests to a first one of the multiple clients that cause the first one of the multiple clients to insert bubbles into the request stream, where a bubble is the absence of a request. The bubbles allow the L1 cache to grant access to another one of the multiple clients without stalling the first one of the multiple clients. The L1 cache services multiple clients with diverse latency and bandwidth requirements and may be reconfigured to provide memory spaces for clients executing multiple parallel threads, where the memory spaces each have a different scope.

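    The bubble mechanism above can be sketched as a toy arbiter. This is a minimal illustration, not the patented implementation: the function name, the representation of a bubble as a skipped cycle, and the policy of requesting a bubble whenever the second client has a pending request are all assumptions made for the example.

    ```python
    from collections import deque

    def arbitrate(primary, secondary):
        """Toy model of bubble-based arbitration (illustrative only).

        Whenever the secondary client has a pending request, the arbiter
        sends a bubble request to the primary client; the primary inserts
        a bubble (issues nothing that cycle) and the freed cycle is
        granted to the secondary client, so the primary stream keeps
        flowing without back-pressure or stalls."""
        primary, secondary = deque(primary), deque(secondary)
        grants, cycle = [], 0
        while primary or secondary:
            if secondary:
                # Bubble cycle: the primary client skips this cycle and
                # the arbiter grants it to the secondary client instead.
                grants.append((cycle, secondary.popleft()))
            else:
                grants.append((cycle, primary.popleft()))
            cycle += 1
        return grants
    ```

    For example, `arbitrate(["A0", "A1"], ["B0"])` returns `[(0, "B0"), (1, "A0"), (2, "A1")]`: the secondary request slips into the bubble cycle while the primary client's stream is never stalled mid-request.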

    Cache miss processing using a defer/replay mechanism
    3.
    Granted patent
    Cache miss processing using a defer/replay mechanism (In force)

    Publication number: US08266383B1

    Publication date: 2012-09-11

    Application number: US12650189

    Filing date: 2009-12-30

    CPC classes: G06F12/0859 G06F12/084

    Abstract: One embodiment of the present invention sets forth a technique for processing cache misses resulting from a request received from one of the multiple clients of an L1 cache. The L1 cache services multiple clients with diverse latency and bandwidth requirements, including at least one client whose requests cannot be stalled. The L1 cache includes storage to buffer pending requests for cache misses. When an entry is available to store a pending request, a request causing a cache miss is accepted. When the data for a read request becomes available, the cache instructs the client to resubmit the read request to receive the data. When an entry is not available to store a pending request, a request causing a cache miss is deferred and the cache provides the client with status information that is used to determine when the request should be resubmitted.

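    The accept/defer/replay flow in the abstract can be sketched as follows. The class name, the status codes (`HIT`/`ACCEPTED`/`DEFERRED`), and the fixed number of pending-miss entries are illustrative assumptions, not details taken from the patent.

    ```python
    class DeferReplayCache:
        """Toy sketch of defer/replay cache-miss handling (illustrative)."""

        def __init__(self, pending_slots=2):
            self.data = {}                  # tag -> cached line
            self.pending = set()            # accepted misses still in flight
            self.pending_slots = pending_slots

        def read(self, tag):
            if tag in self.data:
                return ("HIT", self.data[tag])
            if tag in self.pending:
                # Miss already accepted; data not back yet, resubmit later.
                return ("DEFERRED", None)
            if len(self.pending) < self.pending_slots:
                # An entry is free: accept the miss into the pending buffer.
                self.pending.add(tag)
                return ("ACCEPTED", None)
            # No entry free: defer, and the status tells the client to retry.
            return ("DEFERRED", None)

        def fill(self, tag, line):
            """Backing memory returns the data; the client should now
            replay (resubmit) its read to receive it."""
            self.pending.discard(tag)
            self.data[tag] = line
    ```

    After `fill("a", 42)` completes a pending miss, replaying `read("a")` hits and returns the data, mirroring the resubmit step the abstract describes.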

    Configurable cache for multiple clients
    6.
    Granted patent
    Configurable cache for multiple clients (In force)

    Publication number: US08595425B2

    Publication date: 2013-11-26

    Application number: US12567445

    Filing date: 2009-09-25

    IPC classes: G06F12/00

    Abstract: One embodiment of the present invention sets forth a technique for providing an L1 cache that is a central storage resource. The L1 cache services multiple clients with diverse latency and bandwidth requirements. The L1 cache may be reconfigured to create multiple storage spaces, enabling the L1 cache to replace dedicated buffers, caches, and FIFOs in previous architectures. A “direct mapped” storage region that is configured within the L1 cache may replace dedicated buffers, FIFOs, and interface paths, allowing clients of the L1 cache to exchange attribute and primitive data. The direct mapped storage region may be used as a global register file. A “local and global cache” storage region configured within the L1 cache may be used to support load/store memory requests to multiple spaces. These spaces include global, local, and call-return stack (CRS) memory.

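    The reconfigurable partitioning described above might be sketched like this. The region names echo the abstract's "direct mapped" and "local and global cache" regions, but the API, the word-addressed storage model, and the region sizes are assumptions made for illustration.

    ```python
    class ConfigurableL1:
        """Toy sketch of one storage array carved into named regions."""

        def __init__(self, total_words=16):
            self.storage = [0] * total_words   # the central storage resource
            self.regions = {}                  # name -> (base, size)

        def configure(self, **region_sizes):
            """Repartition the backing storage into named regions, e.g.
            configure(direct_mapped=8, local_global=8)."""
            assert sum(region_sizes.values()) <= len(self.storage)
            self.regions, base = {}, 0
            for name, size in region_sizes.items():
                self.regions[name] = (base, size)
                base += size

        def store(self, region, offset, value):
            base, size = self.regions[region]
            assert offset < size               # stay inside the region
            self.storage[base + offset] = value

        def load(self, region, offset):
            base, size = self.regions[region]
            assert offset < size
            return self.storage[base + offset]
    ```

    Calling `configure` again with different sizes models the reconfiguration step: the same physical storage backs a different set of spaces, each with its own bounds.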

    Sharing data crossbar for reads and writes in a data cache
    7.
    Granted patent
    Sharing data crossbar for reads and writes in a data cache (In force)

    Publication number: US09286256B2

    Publication date: 2016-03-15

    Application number: US12892862

    Filing date: 2010-09-28

    CPC classes: G06F13/4022 G06F13/4031

    Abstract: The invention sets forth an L1 cache architecture that includes a crossbar unit configured to transmit data associated with both read data requests and write data requests. Data associated with read data requests is retrieved from a cache memory and transmitted to the client subsystems. Similarly, data associated with write data requests is transmitted from the client subsystems to the cache memory. To allow for the transmission of both read and write data on the crossbar unit, an arbiter is configured to schedule the crossbar unit transmissions as well as arbitrate between data requests received from the client subsystems.

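    A minimal sketch of sharing one data crossbar between read-return traffic (cache to clients) and write traffic (clients to cache). The round-robin arbitration policy is an assumption — the abstract does not specify how the arbiter schedules the two kinds of transfers — and all names here are illustrative.

    ```python
    from collections import deque

    def schedule_crossbar(reads, writes):
        """Toy arbiter for a single shared data crossbar: each cycle
        carries either read-return data or write data, alternating
        between the two queues when both have work (an assumed policy)."""
        reads, writes = deque(reads), deque(writes)
        schedule, turn = [], 0    # turn 0 -> prefer reads, 1 -> prefer writes
        while reads or writes:
            if turn == 0 and reads:
                schedule.append(("READ", reads.popleft()))
            elif writes:
                schedule.append(("WRITE", writes.popleft()))
            elif reads:
                schedule.append(("READ", reads.popleft()))
            turn ^= 1             # alternate preference each cycle
        return schedule
    ```

    Because both directions share one set of wires, a read return and a write can never occupy the same cycle; the schedule makes that serialization explicit.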

    Sharing Data Crossbar for Reads and Writes in a Data Cache
    8.
    Patent application
    Sharing Data Crossbar for Reads and Writes in a Data Cache (In force)

    Publication number: US20110082961A1

    Publication date: 2011-04-07

    Application number: US12892862

    Filing date: 2010-09-28

    IPC classes: G06F13/36 G06F13/00

    CPC classes: G06F13/4022 G06F13/4031

    Abstract: The invention sets forth an L1 cache architecture that includes a crossbar unit configured to transmit data associated with both read data requests and write data requests. Data associated with read data requests is retrieved from a cache memory and transmitted to the client subsystems. Similarly, data associated with write data requests is transmitted from the client subsystems to the cache memory. To allow for the transmission of both read and write data on the crossbar unit, an arbiter is configured to schedule the crossbar unit transmissions as well as arbitrate between data requests received from the client subsystems.


    CONFIGURABLE CACHE FOR MULTIPLE CLIENTS
    9.
    Patent application
    CONFIGURABLE CACHE FOR MULTIPLE CLIENTS (In force)

    Publication number: US20110078367A1

    Publication date: 2011-03-31

    Application number: US12567445

    Filing date: 2009-09-25

    IPC classes: G06F12/02 G06F12/00 G06F12/08

    Abstract: One embodiment of the present invention sets forth a technique for providing an L1 cache that is a central storage resource. The L1 cache services multiple clients with diverse latency and bandwidth requirements. The L1 cache may be reconfigured to create multiple storage spaces, enabling the L1 cache to replace dedicated buffers, caches, and FIFOs in previous architectures. A “direct mapped” storage region that is configured within the L1 cache may replace dedicated buffers, FIFOs, and interface paths, allowing clients of the L1 cache to exchange attribute and primitive data. The direct mapped storage region may be used as a global register file. A “local and global cache” storage region configured within the L1 cache may be used to support load/store memory requests to multiple spaces. These spaces include global, local, and call-return stack (CRS) memory.
