-
Publication No.: US08595425B2
Publication Date: 2013-11-26
Application No.: US12567445
Filing Date: 2009-09-25
Applicant: Alexander L. Minkin, Steven James Heinrich, Rajeshwaran Selvanesan, Brett W. Coon, Charles McCarver, Anjana Rajendran, Stewart G. Carlton
Inventor: Alexander L. Minkin, Steven James Heinrich, Rajeshwaran Selvanesan, Brett W. Coon, Charles McCarver, Anjana Rajendran, Stewart G. Carlton
IPC Class: G06F12/00
CPC Class: G06F12/084, G06F2212/2515, G06F2212/301, G06F2212/6012
Abstract: One embodiment of the present invention sets forth a technique for providing an L1 cache that is a central storage resource. The L1 cache services multiple clients with diverse latency and bandwidth requirements. The L1 cache may be reconfigured to create multiple storage spaces, enabling the L1 cache to replace dedicated buffers, caches, and FIFOs in previous architectures. A “direct mapped” storage region that is configured within the L1 cache may replace dedicated buffers, FIFOs, and interface paths, allowing clients of the L1 cache to exchange attribute and primitive data. The direct mapped storage region may be used as a global register file. A “local and global cache” storage region configured within the L1 cache may be used to support load/store memory requests to multiple spaces. These spaces include global, local, and call-return stack (CRS) memory.
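The reconfigurable split between a tag-free, directly addressed region and a tagged cache region can be pictured with a small software model. The following C++ sketch is illustrative only: the class, sizes, and method names are assumptions rather than structures from the patent, and real hardware would repartition SRAM banks instead of a vector.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative model of a unified L1 storage array that can be repartitioned
// into a direct-mapped region (addressed by offset, no tag lookup) and a
// remaining cached region. Names and sizes are hypothetical.
class UnifiedL1 {
public:
    UnifiedL1(std::size_t totalWords, std::size_t directMappedWords)
        : storage_(totalWords, 0), directWords_(directMappedWords) {}

    // Move the boundary between the direct-mapped and cached regions,
    // e.g. when switching between graphics and compute workloads.
    void reconfigure(std::size_t directMappedWords) { directWords_ = directMappedWords; }

    // Offsets below directWords_ fall in the direct-mapped region, which
    // clients could use as a shared staging area or global register file.
    std::uint32_t readDirect(std::size_t offset) const { return storage_.at(offset); }
    void writeDirect(std::size_t offset, std::uint32_t v) { storage_.at(offset) = v; }

    // Words left over back the tagged local/global cache region
    // (tag handling is omitted in this sketch).
    std::size_t cacheRegionWords() const { return storage_.size() - directWords_; }

private:
    std::vector<std::uint32_t> storage_;
    std::size_t directWords_;
};
```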
-
Publication No.: US08266382B1
Publication Date: 2012-09-11
Application No.: US12650214
Filing Date: 2009-12-30
Applicant: Alexander L. Minkin, Steven J. Heinrich, Rajeshwaran Selvanesan, Charles McCarver, Stewart Glenn Carlton, Anjana Rajendran, Yan Yan Tang
Inventor: Alexander L. Minkin, Steven J. Heinrich, Rajeshwaran Selvanesan, Charles McCarver, Stewart Glenn Carlton, Anjana Rajendran, Yan Yan Tang
CPC Class: G06F13/28
Abstract: One embodiment of the present invention sets forth a technique for arbitrating requests received from one of the multiple clients of an L1 cache and for providing hints to the client to assist in arbitration. The L1 cache services multiple clients with diverse latency and bandwidth requirements and may be reconfigured to provide memory spaces for clients executing multiple parallel threads, where the memory spaces each have a different scope.
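How an arbiter might serve several per-client request queues while handing each client a hint about contention can be sketched in a few lines. The C++ below is a minimal model that assumes simple round-robin selection and a boolean back-off hint; the policy and all identifiers are hypothetical, not the patented mechanism.

```cpp
#include <cstddef>
#include <deque>
#include <optional>
#include <vector>

struct Request { int clientId; unsigned address; };

// Illustrative round-robin arbiter for multiple L1 clients that also exposes
// a per-client hint the client can use to moderate its own request rate.
class L1Arbiter {
public:
    explicit L1Arbiter(std::size_t numClients) : queues_(numClients), next_(0) {}

    void submit(const Request& r) { queues_[r.clientId].push_back(r); }

    // Grant one request per cycle, rotating priority among clients.
    std::optional<Request> arbitrate() {
        for (std::size_t i = 0; i < queues_.size(); ++i) {
            std::size_t c = (next_ + i) % queues_.size();
            if (!queues_[c].empty()) {
                Request granted = queues_[c].front();
                queues_[c].pop_front();
                next_ = (c + 1) % queues_.size();
                return granted;
            }
        }
        return std::nullopt;
    }

    // Hint to a client: true if other clients currently have queued requests,
    // so the client may choose to back off.
    bool backOffHint(int clientId) const {
        for (std::size_t c = 0; c < queues_.size(); ++c)
            if (c != static_cast<std::size_t>(clientId) && !queues_[c].empty()) return true;
        return false;
    }

private:
    std::vector<std::deque<Request>> queues_;
    std::size_t next_;
};
```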
-
Publication No.: US08335892B1
Publication Date: 2012-12-18
Application No.: US12650226
Filing Date: 2009-12-30
Applicant: Alexander L. Minkin, Steven J. Heinrich, Rajeshwaran Selvanesan, Charles McCarver, Stewart Glenn Carlton, Anjana Rajendran
Inventor: Alexander L. Minkin, Steven J. Heinrich, Rajeshwaran Selvanesan, Charles McCarver, Stewart Glenn Carlton, Anjana Rajendran
CPC Class: G06F12/084, G06F12/0857
Abstract: One embodiment of the present invention sets forth a technique for arbitrating requests received by an L1 cache from multiple clients. The L1 cache outputs bubble requests to a first one of the multiple clients that cause the first one of the multiple clients to insert bubbles into the request stream, where a bubble is the absence of a request. The bubbles allow the L1 cache to grant access to another one of the multiple clients without stalling the first one of the multiple clients. The L1 cache services multiple clients with diverse latency and bandwidth requirements and may be reconfigured to provide memory spaces for clients executing multiple parallel threads, where the memory spaces each have a different scope.
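The bubble idea, a deliberately empty slot in one client's request stream that the arbiter can hand to another client, can be modeled as follows. This C++ sketch assumes two clients and a simple "request a bubble whenever the other client has work" policy; the policy and every identifier are illustrative assumptions, not the claimed design.

```cpp
#include <deque>
#include <optional>

struct Req { int clientId; unsigned address; };

// Client whose stream cannot be stalled: when asked for a bubble it emits an
// empty slot (no request) instead of its next request.
class BubblingClient {
public:
    std::optional<Req> nextSlot(bool bubbleRequested, unsigned nextAddress) {
        if (bubbleRequested) return std::nullopt;  // bubble: no request this slot
        return Req{0, nextAddress};
    }
};

// L1 side: asks for a bubble whenever the other client has pending work, then
// fills the empty slot with that client's request instead of stalling anyone.
class L1WithBubbles {
public:
    bool wantBubble() const { return !otherClientQueue_.empty(); }

    std::optional<Req> acceptSlot(const std::optional<Req>& slotFromClient0) {
        if (slotFromClient0) return slotFromClient0;   // grant client 0's request
        if (!otherClientQueue_.empty()) {              // fill the bubble with client 1
            Req r = otherClientQueue_.front();
            otherClientQueue_.pop_front();
            return r;
        }
        return std::nullopt;                           // idle slot
    }

    void enqueueOther(const Req& r) { otherClientQueue_.push_back(r); }

private:
    std::deque<Req> otherClientQueue_;
};
```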
-
Publication No.: US20110078367A1
Publication Date: 2011-03-31
Application No.: US12567445
Filing Date: 2009-09-25
Applicant: Alexander L. Minkin, Steven James Heinrich, Rajeshwaran Selvanesan, Brett W. Coon, Charles McCarver, Anjana Rajendran, Stewart G. Carlton
Inventor: Alexander L. Minkin, Steven James Heinrich, Rajeshwaran Selvanesan, Brett W. Coon, Charles McCarver, Anjana Rajendran, Stewart G. Carlton
CPC Class: G06F12/084, G06F2212/2515, G06F2212/301, G06F2212/6012
Abstract: One embodiment of the present invention sets forth a technique for providing an L1 cache that is a central storage resource. The L1 cache services multiple clients with diverse latency and bandwidth requirements. The L1 cache may be reconfigured to create multiple storage spaces, enabling the L1 cache to replace dedicated buffers, caches, and FIFOs in previous architectures. A “direct mapped” storage region that is configured within the L1 cache may replace dedicated buffers, FIFOs, and interface paths, allowing clients of the L1 cache to exchange attribute and primitive data. The direct mapped storage region may be used as a global register file. A “local and global cache” storage region configured within the L1 cache may be used to support load/store memory requests to multiple spaces. These spaces include global, local, and call-return stack (CRS) memory.
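For this published application, the aspect worth illustrating is how one "local and global cache" region can back several logical memory spaces. The C++ sketch below assumes each load/store request carries a space tag and that local and CRS accesses are rebased onto a shared backing store; the enum, the offsets, and the routing are hypothetical, not taken from the claims.

```cpp
#include <cstdint>

// Illustrative address-space tagging for the "local and global cache" region:
// each load/store names the space it targets and the cache routes it
// accordingly. Values and routing are assumptions made for illustration.
enum class MemSpace { Global, Local, CallReturnStack };

struct LdStRequest {
    MemSpace      space;    // which memory space the request targets
    std::uint64_t address;  // address within that space
    bool          isStore;
};

// Per-space base offsets let a single cache region serve several logical
// spaces; real hardware would also fold in per-thread offsets for local/CRS.
std::uint64_t toBackingAddress(const LdStRequest& r,
                               std::uint64_t localBase,
                               std::uint64_t crsBase) {
    switch (r.space) {
        case MemSpace::Global:          return r.address;
        case MemSpace::Local:           return localBase + r.address;
        case MemSpace::CallReturnStack: return crsBase + r.address;
    }
    return r.address;  // unreachable; keeps compilers quiet
}
```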
-
Publication No.: US08266383B1
Publication Date: 2012-09-11
Application No.: US12650189
Filing Date: 2009-12-30
Applicant: Alexander L. Minkin, Steven J. Heinrich, Rajeshwaran Selvanesan, Charles McCarver, Stewart Glenn Carlton, Ming Y. Siu, Yan Yan Tang, Robert J. Stoll
Inventor: Alexander L. Minkin, Steven J. Heinrich, Rajeshwaran Selvanesan, Charles McCarver, Stewart Glenn Carlton, Ming Y. Siu, Yan Yan Tang, Robert J. Stoll
CPC Class: G06F12/0859, G06F12/084
Abstract: One embodiment of the present invention sets forth a technique for processing cache misses resulting from a request received from one of the multiple clients of an L1 cache. The L1 cache services multiple clients with diverse latency and bandwidth requirements, including at least one client whose requests cannot be stalled. The L1 cache includes storage to buffer pending requests for cache misses. When an entry is available to store a pending request, a request causing a cache miss is accepted. When the data for a read request becomes available, the cache instructs the client to resubmit the read request to receive the data. When an entry is not available to store a pending request, a request causing a cache miss is deferred and the cache provides the client with status information that is used to determine when the request should be resubmitted.
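The accept-or-defer behavior around a bounded pool of pending-miss entries can be summarized with a small model. In this C++ sketch the table size, the status enum, and the resubmit notification are assumptions made for illustration; real miss-status tracking hardware carries considerably more state per entry.

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>

// Illustrative miss handling: a bounded table of pending misses. If an entry
// is free the miss is accepted; otherwise the request is deferred and the
// client receives status it can use to decide when to retry.
enum class MissStatus { Accepted, Deferred };

class MissTracker {
public:
    explicit MissTracker(std::size_t maxPending) : maxPending_(maxPending) {}

    // Called when a request misses on the given line address.
    MissStatus onMiss(std::uint64_t lineAddr, int clientId) {
        if (pending_.size() >= maxPending_) return MissStatus::Deferred;
        pending_[lineAddr] = clientId;   // remember which client to notify
        return MissStatus::Accepted;
    }

    // Called when fill data returns from the next memory level; returns the
    // client that should be told to resubmit its read request.
    int onFill(std::uint64_t lineAddr) {
        int clientId = pending_.at(lineAddr);
        pending_.erase(lineAddr);
        return clientId;
    }

    // Status a deferred client could poll before resubmitting.
    bool hasFreeEntry() const { return pending_.size() < maxPending_; }

private:
    std::size_t maxPending_;
    std::unordered_map<std::uint64_t, int> pending_;  // line address -> client id
};
```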
-
Publication No.: US20050188009A1
Publication Date: 2005-08-25
Application No.: US10886231
Filing Date: 2004-07-07
Applicant: Arthur McKinney, Charles McCarver, Vahid Samiee
Inventor: Arthur McKinney, Charles McCarver, Vahid Samiee
IPC Class: G06F12/08, G06F13/40, G06F15/173, H03K19/003, H03K19/0185, G06F15/16, G06F12/00
CPC Class: G06F12/0813, G06F12/0822, G06F12/0831, G06F13/4022, G06F13/4077, G06F15/173, H03K19/00361, H03K19/018521
Abstract: A method for maintaining coherent data in a multiprocessor system having a plurality of processors coupled to main memory, where each processor has an internal cache that cannot be read from outside the processor. The method includes requesting data associated with a memory location in main memory and determining whether an external cache coupled to an application-specific integrated circuit associated with a second processor contains a reference to the requested data. A snoop cycle is performed on the second processor if the external cache has a reference to the requested data, whereupon a determination is made as to whether the requested data has been modified.
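The decision flow (consult the external cache attached to the second processor's ASIC, then snoop that processor only when the cache references the requested line) can be expressed compactly. The C++ below is a schematic model; the data structures and the snoop callback are placeholders, not the patent's implementation.

```cpp
#include <cstdint>
#include <unordered_set>

// Illustrative stand-in for the external cache next to the second processor's
// ASIC: it records which line addresses it currently references.
struct ExternalCache {
    std::unordered_set<std::uint64_t> lines;
    bool contains(std::uint64_t line) const { return lines.count(line) != 0; }
};

enum class SnoopResult { NotPresent, CleanCopy, ModifiedCopy };

// Handle a request for a line: skip the snoop entirely if the external cache
// has no reference, otherwise snoop the second processor and report whether
// its copy was modified. The snoop callback returns true for a modified copy.
SnoopResult handleRequest(std::uint64_t line, const ExternalCache& peerCache,
                          bool (*snoopProcessor)(std::uint64_t)) {
    if (!peerCache.contains(line)) return SnoopResult::NotPresent;
    return snoopProcessor(line) ? SnoopResult::ModifiedCopy : SnoopResult::CleanCopy;
}
```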
-