Chained cache coherency states for sequential homogeneous access to a cache line with outstanding data response
    1.
    Invention Application (Expired)

    Publication No.: US20070083717A1

    Publication Date: 2007-04-12

    Application No.: US11245313

    Filing Date: 2005-10-06

    IPC Class: G06F13/28

    CPC Class: G06F12/0831 G06F12/0822

    Abstract: A method and data processing system for sequentially coupling successive, homogenous processor requests for a cache line in a chain before the data is received in the cache of a first processor within the chain. Chained intermediate coherency states are assigned to track the chain of processor requests and subsequent access permission provided, prior to receipt of the data at the first processor starting the chain. The chained intermediate coherency state assigned identifies the processor operation and a directional identifier identifies the processor to which the cache line is to be forwarded. When the data is received at the cache of the first processor within the chain, the first processor completes its operation on (or with) the data and then forwards the data to the next processor in the chain. The chain is immediately stopped when a non-homogenous operation is snooped by the last-in-chain processor.
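
    As a rough illustration of the chaining idea in this abstract, the C sketch below models one cache line's chained intermediate state: the recorded operation, a directional identifier naming the next processor in the chain, and the rule that a snooped non-homogeneous operation closes the chain. The names (chain_state_t, snoop_request, data_arrived) and the two-operation enum are assumptions for illustration, not the patent's implementation.

```c
/* Minimal sketch of chaining homogeneous requests for one cache line.
 * All names here are illustrative, not taken from the patent. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { OP_READ, OP_RWITM } op_t;   /* requests chain only while the op matches */

typedef struct {
    op_t chained_op;   /* operation recorded in the chained intermediate state */
    int  forward_to;   /* directional identifier: next processor in the chain  */
    bool chain_open;   /* last-in-chain still accepts homogeneous snoops       */
} chain_state_t;

/* The last-in-chain processor snoops a new request for the same line. */
static bool snoop_request(chain_state_t *s, op_t op, int requester)
{
    if (!s->chain_open)
        return false;              /* chain already closed: requester must retry */
    if (op != s->chained_op) {     /* non-homogeneous operation snooped          */
        s->chain_open = false;     /* stop the chain immediately                 */
        return false;
    }
    s->forward_to = requester;     /* extend chain: remember who gets the line   */
    return true;
}

/* Outstanding data finally arrives at this processor: use it, then pass it on. */
static void data_arrived(const chain_state_t *s, int self)
{
    printf("P%d completes %s, forwards line to P%d\n",
           self, s->chained_op == OP_READ ? "READ" : "RWITM", s->forward_to);
}

int main(void)
{
    chain_state_t s = { OP_READ, 1, true };
    snoop_request(&s, OP_READ, 2);    /* homogeneous: P2 joins the chain       */
    snoop_request(&s, OP_RWITM, 3);   /* non-homogeneous: chain is closed      */
    data_arrived(&s, 0);              /* P0 uses the data, then forwards to P2 */
    return 0;
}
```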

    Chained cache coherency states for sequential non-homogeneous access to a cache line with outstanding data response
    2.
    Invention Application (In Force)

    Publication No.: US20070083716A1

    Publication Date: 2007-04-12

    Application No.: US11245312

    Filing Date: 2005-10-06

    IPC Class: G06F13/28

    CPC Class: G06F12/0831

    Abstract: A method for sequentially coupling successive processor requests for a cache line before the data is received in the cache of a first coupled processor. Both homogenous and non-homogenous operations are chained to each other, and the coherency protocol includes several new intermediate coherency responses associated with the chained states. Chained coherency states are assigned to track the chain of processor requests and the grant of access permission prior to receipt of the data at the first processor. The chained coherency states also identify the address of the receiving processor. When data is received at the cache of the first processor within the chain, the processor completes its operation on (or with) the data and then forwards the data to the next processor in the chain. The chained coherency protocol frees up address bus bandwidth by reducing the number of retries.
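
    Purely as illustration of the retry-reduction point in this abstract: while data for the line is still outstanding, a snooped request is answered with an assumed "chained" response that records the receiving processor instead of forcing a bus retry, and read and RWITM requests may both join the chain. The response names and the line_state_t structure are invented for this sketch.

```c
/* Illustrative sketch only: replacing bus retries with chained snoop
 * responses while the data response for a line is still outstanding. */
#include <stdio.h>

typedef enum { RESP_RETRY, RESP_CHAINED_READ, RESP_CHAINED_RWITM } snoop_resp_t;

typedef struct {
    int busy;        /* data for this line has not yet arrived       */
    int next_proc;   /* chained coherency state records the receiver */
} line_state_t;

/* Snoop handler for a processor that currently owns the tail of the chain. */
static snoop_resp_t snoop(line_state_t *l, int requester, int is_rwitm)
{
    if (!l->busy)
        return RESP_RETRY;        /* placeholder: ordinary non-chained handling not modeled */
    l->next_proc = requester;     /* remember who receives the line next                    */
    return is_rwitm ? RESP_CHAINED_RWITM : RESP_CHAINED_READ;
}

int main(void)
{
    line_state_t l = { 1, -1 };
    snoop_resp_t r1 = snoop(&l, 2, 0);   /* read while data is outstanding */
    printf("response=%d, line chained to P%d\n", r1, l.next_proc);
    snoop_resp_t r2 = snoop(&l, 3, 1);   /* RWITM chained behind the read  */
    printf("response=%d, line chained to P%d\n", r2, l.next_proc);
    return 0;
}
```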

    ASSIST THREAD FOR INJECTING CACHE MEMORY IN A MICROPROCESSOR
    3.
    Invention Application (In Force)

    Publication No.: US20120198459A1

    Publication Date: 2012-08-02

    Application No.: US13434423

    Filing Date: 2012-03-29

    IPC Class: G06F9/46 G06F12/08

    Abstract: A data processing system includes a microprocessor having access to multiple levels of cache memories. The microprocessor executes a main thread compiled from a source code object. The system includes a processor for executing an assist thread also derived from the source code object. The assist thread includes memory reference instructions of the main thread and only those arithmetic instructions required to resolve the memory reference instructions. A scheduler configured to schedule the assist thread in conjunction with the corresponding execution thread is configured to execute the assist thread ahead of the execution thread by a determinable threshold such as the number of main processor cycles or the number of code instructions. The assist thread may execute in the main processor or in a dedicated assist processor that makes direct memory accesses to one of the lower level cache memory elements.
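
    A minimal sketch of the assist-thread idea, under the assumption that the assist code contains only the address arithmetic and memory touches of the main loop and stays a fixed number of iterations ahead of it. For simplicity the assist body is shown as an inline helper rather than a separate hardware thread, and __builtin_prefetch (a GCC/Clang builtin) stands in for whatever cache-injection mechanism the assist processor would use.

```c
/* Illustrative sketch: an "assist" routine that performs only the memory
 * references of the main loop, kept LEAD iterations ahead of it. */
#include <stdio.h>

#define N    1024
#define LEAD   64              /* determinable threshold, in loop iterations */

static int data[N];

/* Assist-thread body: touch the data the main thread will need shortly. */
static void assist_touch(int main_index)
{
    static int ahead = 0;      /* how far the assist stream has progressed */
    while (ahead < main_index + LEAD && ahead < N) {
        __builtin_prefetch(&data[ahead]);   /* GCC/Clang builtin, assumed available */
        ahead++;
    }
}

int main(void)
{
    long sum = 0;
    for (int i = 0; i < N; i++) {
        assist_touch(i);       /* in the patent this runs as a separate thread */
        sum += data[i];        /* the main thread's real work                  */
    }
    printf("sum=%ld\n", sum);
    return 0;
}
```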

    Assist thread for injecting cache memory in a microprocessor
    4.
    Granted Patent (In Force)

    Publication No.: US08230422B2

    Publication Date: 2012-07-24

    Application No.: US11034546

    Filing Date: 2005-01-13

    IPC Class: G06F9/46 G06F9/40 G06F13/28

    Abstract: A data processing system includes a microprocessor having access to multiple levels of cache memories. The microprocessor executes a main thread compiled from a source code object. The system includes a processor for executing an assist thread also derived from the source code object. The assist thread includes memory reference instructions of the main thread and only those arithmetic instructions required to resolve the memory reference instructions. A scheduler configured to schedule the assist thread in conjunction with the corresponding execution thread is configured to execute the assist thread ahead of the execution thread by a determinable threshold such as the number of main processor cycles or the number of code instructions. The assist thread may execute in the main processor or in a dedicated assist processor that makes direct memory accesses to one of the lower level cache memory elements.

    Chained cache coherency states for sequential non-homogeneous access to a cache line with outstanding data response
    5.
    Granted Patent (In Force)

    Publication No.: US07409504B2

    Publication Date: 2008-08-05

    Application No.: US11245312

    Filing Date: 2005-10-06

    IPC Class: G06F12/00

    CPC Class: G06F12/0831

    Abstract: A method for sequentially coupling successive processor requests for a cache line before the data is received in the cache of a first coupled processor. Both homogenous and non-homogenous operations are chained to each other, and the coherency protocol includes several new intermediate coherency responses associated with the chained states. Chained coherency states are assigned to track the chain of processor requests and the grant of access permission prior to receipt of the data at the first processor. The chained coherency states also identify the address of the receiving processor. When data is received at the cache of the first processor within the chain, the processor completes its operation on (or with) the data and then forwards the data to the next processor in the chain. The chained coherency protocol frees up address bus bandwidth by reducing the number of retries.

    System and Method for Reducing Unnecessary Cache Operations
    6.
    Invention Application (Expired)

    Publication No.: US20070136535A1

    Publication Date: 2007-06-14

    Application No.: US11674960

    Filing Date: 2007-02-14

    IPC Class: G06F12/00

    Abstract: A system and method for cache management in a data processing system. The data processing system includes a processor and a memory hierarchy. The memory hierarchy includes at least an upper memory cache, at least a lower memory cache, and a write-back data structure. In response to replacing data from the upper memory cache, the upper memory cache examines the write-back data structure to determine whether or not the data is present in the lower memory cache. If the data is present in the lower memory cache, the data is replaced in the upper memory cache without casting out the data to the lower memory cache.
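
    A small illustrative sketch of the castout filter described in this abstract: before writing an evicted line down to the lower-level cache, the upper cache consults a structure recording what the lower cache already holds, and silently drops the victim if it is present there. lower_has, present_in_lower, and castout are hypothetical stand-ins for the write-back data structure and the bus operation.

```c
/* Illustrative sketch only: skip the castout when the lower cache
 * already holds the victim line. */
#include <stdbool.h>
#include <stdio.h>

#define LOWER_SETS 256

static bool lower_has[LOWER_SETS];   /* stand-in for the write-back data structure */

static bool present_in_lower(unsigned tag) { return lower_has[tag % LOWER_SETS]; }
static void castout(unsigned tag)          { printf("castout of line %u to the lower cache\n", tag); }

/* Called when the upper cache replaces (evicts) a victim line. */
static void replace_line(unsigned victim_tag)
{
    if (present_in_lower(victim_tag))
        return;                 /* lower cache already has the data: no castout needed */
    castout(victim_tag);        /* otherwise perform the usual castout                 */
}

int main(void)
{
    lower_has[42 % LOWER_SETS] = true;
    replace_line(42);           /* dropped silently */
    replace_line(7);            /* normal castout   */
    return 0;
}
```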

    System and method of managing cache hierarchies with adaptive mechanisms
    7.
    Invention Application (Expired)

    Publication No.: US20060277366A1

    Publication Date: 2006-12-07

    Application No.: US11143328

    Filing Date: 2005-06-02

    IPC Class: G06F12/00

    Abstract: A system and method of managing cache hierarchies with adaptive mechanisms. A preferred embodiment of the present invention includes, in response to selecting a data block for eviction from a memory cache (the source cache) out of a collection of memory caches, examining a data structure to determine whether an entry exists that indicates that the data block has been evicted from the source memory cache, or another peer cache, to a slower cache or memory and subsequently retrieved from the slower cache or memory into the source memory cache or other peer cache. Also, a preferred embodiment of the present invention includes, in response to determining the entry exists in the data structure, selecting a peer memory cache out of the collection of memory caches at the same level in the hierarchy to receive the data block from the source memory cache upon eviction.
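
    The sketch below illustrates the adaptive eviction choice in this abstract under simple assumptions: a history table (here called bounced) marks blocks that were previously evicted to slower storage and then fetched back, and such victims are redirected to a same-level peer cache rather than down the hierarchy. The table and helper names are invented for the example.

```c
/* Illustrative sketch only: choose a peer cache as the eviction target
 * for blocks that have bounced out of and back into this cache level. */
#include <stdbool.h>
#include <stdio.h>

#define HIST 128

static bool bounced[HIST];   /* evicted-then-retrieved history (assumed) */

static void evict_to_peer(unsigned tag)  { printf("line %u -> peer cache at the same level\n", tag); }
static void evict_to_lower(unsigned tag) { printf("line %u -> lower cache or memory\n", tag); }

/* Called when the source cache selects a victim block for eviction. */
static void on_eviction(unsigned victim_tag)
{
    if (bounced[victim_tag % HIST])
        evict_to_peer(victim_tag);   /* keep it close: it was reused after a prior eviction */
    else
        evict_to_lower(victim_tag);
}

int main(void)
{
    bounced[9 % HIST] = true;
    on_eviction(9);
    on_eviction(10);
    return 0;
}
```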

    System and method for reducing unnecessary cache operations
    8.
    Granted Patent (Expired)

    Publication No.: US07698508B2

    Publication Date: 2010-04-13

    Application No.: US11674960

    Filing Date: 2007-02-14

    IPC Class: G06F12/00

    Abstract: A system and method for cache management in a data processing system. The data processing system includes a processor and a memory hierarchy. The memory hierarchy includes at least an upper memory cache, at least a lower memory cache, and a write-back data structure. In response to replacing data from the upper memory cache, the upper memory cache examines the write-back data structure to determine whether or not the data is present in the lower memory cache. If the data is present in the lower memory cache, the data is replaced in the upper memory cache without casting out the data to the lower memory cache.

    Chained cache coherency states for sequential homogeneous access to a cache line with outstanding data response
    9.
    Granted Patent (Expired)

    Publication No.: US07370155B2

    Publication Date: 2008-05-06

    Application No.: US11245313

    Filing Date: 2005-10-06

    IPC Class: G06F12/00

    CPC Class: G06F12/0831 G06F12/0822

    Abstract: A method and data processing system for sequentially coupling successive, homogenous processor requests for a cache line in a chain before the data is received in the cache of a first processor within the chain. Chained intermediate coherency states are assigned to track the chain of processor requests and subsequent access permission provided, prior to receipt of the data at the first processor starting the chain. The chained intermediate coherency state assigned identifies the processor operation and a directional identifier identifies the processor to which the cache line is to be forwarded. When the data is received at the cache of the first processor within the chain, the first processor completes its operation on (or with) the data and then forwards the data to the next processor in the chain. The chain is immediately stopped when a non-homogenous operation is snooped by the last-in-chain processor.

    Just-In-Time Prefetching
    10.
    Invention Application (Expired)

    Publication No.: US20070283101A1

    Publication Date: 2007-12-06

    Application No.: US11422459

    Filing Date: 2006-06-06

    IPC Class: G06F12/00

    CPC Class: G06F12/0862

    Abstract: A method and an apparatus for performing just-in-time data prefetching within a data processing system comprising a processor, a cache or prefetch buffer, and at least one memory storage device. The apparatus comprises a prefetch engine having means for issuing a data prefetch request for prefetching a data cache line from the memory storage device for utilization by the processor. The apparatus further comprises logic/utility for dynamically adjusting a prefetch distance between issuance by the prefetch engine of the data prefetch request and issuance by the processor of a demand (load request) targeting the data/cache line being returned by the data prefetch request, so that a next data prefetch request for a subsequent cache line completes the return of the data/cache line at effectively the same time that a demand for that subsequent data/cache line is issued by the processor.
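
    An illustrative sketch of the distance adjustment this abstract describes: if a prefetched line returns after the demand load for it has already issued, the prefetch distance grows; if it returns far earlier than needed, the distance shrinks, so prefetch returns line up with demand issues. The structure fields and the one-step adjustment rule are assumptions for the example.

```c
/* Illustrative sketch only: keep prefetch returns roughly aligned with
 * the demand loads that consume them. */
#include <stdio.h>

typedef struct {
    int distance;          /* lines ahead of the demand stream to prefetch    */
    int cycles_per_line;   /* demand-stream consumption rate, cycles per line */
} jit_prefetcher_t;

/* Called when a prefetch completes; late_by_cycles > 0 means the demand
 * load had already issued before the prefetched data came back. */
static void adjust_distance(jit_prefetcher_t *p, int late_by_cycles)
{
    if (late_by_cycles > 0)
        p->distance++;                             /* data arrived late: prefetch earlier    */
    else if (late_by_cycles < -p->cycles_per_line)
        p->distance--;                             /* data sat idle too long: prefetch later */
    if (p->distance < 1)
        p->distance = 1;
}

int main(void)
{
    jit_prefetcher_t p = { 4, 30 };
    adjust_distance(&p, 15);    /* late arrival: distance grows   */
    adjust_distance(&p, -90);   /* very early arrival: it shrinks */
    printf("prefetch distance = %d lines\n", p.distance);
    return 0;
}
```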
