Modified-invalid cache state to reduce cache-to-cache data transfer operations for speculatively-issued full cache line writes
    1.
    Granted invention patent (status: Expired)

    Publication No.: US07284097B2

    Publication Date: 2007-10-16

    Application No.: US10675744

    Filing Date: 2003-09-30

    IPC Class: G06F12/00

    Abstract: A cache coherency protocol that includes a modified-invalid (Mi) state, which enables execution of a DMA Claim or DClaim operation to assign sole ownership of a cache line to a device that is going to overwrite the entire cache line, without cache-to-cache data transfer. The protocol enables completion of speculatively-issued full cache line writes without requiring cache-to-cache transfer of data on the data bus during a preceding DMA Claim or DClaim operation. The modified-invalid (Mi) state assigns sole ownership of the cache line to an I/O device that has speculatively issued a DMA Write, or to a processor that has speculatively issued a DCBZ operation, to overwrite the entire cache line; the Mi state prevents data from being sent to the cache line from another cache, since that data would most probably be overwritten.

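The DClaim-to-Mi transition the abstract describes can be sketched as a toy coherence model. All class, function, and state names below are illustrative assumptions, not taken from the patent; the point is only that peers invalidate without forwarding data, since the requestor will overwrite the whole line:

```python
# Toy sketch of the Mi-state idea: a full-line-write DClaim invalidates
# peer copies without any cache-to-cache data transfer.
from enum import Enum

class State(Enum):
    M = "Modified"
    S = "Shared"
    I = "Invalid"
    Mi = "Modified-invalid"   # sole ownership granted, contents stale until the write lands

class CacheLine:
    def __init__(self, state=State.I):
        self.state = state
        self.data = None

def dclaim_full_line_write(requestor, peers):
    """Grant sole ownership for a speculative full-line write.

    Peers holding the line simply invalidate; no data moves on the bus
    because the entire line is about to be overwritten anyway.
    """
    data_transfers = 0
    for peer in peers:
        if peer.state is not State.I:
            peer.state = State.I        # invalidate, do NOT forward data
    requestor.state = State.Mi
    return data_transfers

def complete_full_line_write(line, new_data):
    # The speculative write arrives; only now does the line hold valid data.
    assert line.state is State.Mi
    line.data = new_data
    line.state = State.M
```

In a real protocol the Mi state would also have to handle the case where the speculative write never completes; this sketch omits that path.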

    Enhanced multiprocessor response bus protocol enabling intra-cache line reference exchange
    2.
    Granted invention patent (status: Expired)

    Publication No.: US06704843B1

    Publication Date: 2004-03-09

    Application No.: US09696890

    Filing Date: 2000-10-26

    IPC Class: G06F12/08

    CPC Class: G06F12/0831

    Abstract: Within a multiprocessor system in which dynamic application sequence behavior information is maintained in cache directories, system bus snoopers append the dynamic application sequence behavior information for the target cache line to their snoop responses. The system controller, which may also maintain dynamic application sequence behavior information in a history directory, employs the available dynamic application sequence behavior information to append “hints” to the combined response, appends the concatenated dynamic application sequence behavior information to the combined response, or both. Either the hints or the dynamic application sequence behavior information may be employed by the bus master and other snoopers in cache management.

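The snoop-and-combine flow above can be illustrated with a minimal sketch. The dictionary layout, the "both-halves" history token, and the hint string are all invented for illustration; the patent does not specify these encodings:

```python
# Toy sketch: snoopers attach per-line reference history from their cache
# directories to snoop responses; the controller folds the histories into
# a "hint" on the combined response.

def snoop(cache, addr):
    """Return this cache's snoop response, carrying any stored history."""
    entry = cache.get(addr)
    if entry is None:
        return {"resp": "null", "history": None}
    return {"resp": entry["state"], "history": entry["history"]}

def combined_response(snoop_responses):
    """Combine snoop responses and append a hint derived from the histories."""
    histories = [r["history"] for r in snoop_responses if r["history"]]
    # Example policy: if past references touched both halves of the line,
    # hint the bus master to fetch the full line up front.
    hint = "load-full-line" if any("both-halves" in h for h in histories) else None
    return {"hint": hint, "histories": histories}
```

The bus master would then consult `hint` (or the raw histories) when deciding how much of the line to request.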

    Optimized cache allocation algorithm for multiple speculative requests
    4.
    Granted invention patent (status: Expired)

    Publication No.: US06393528B1

    Publication Date: 2002-05-21

    Application No.: US09345714

    Filing Date: 1999-06-30

    IPC Class: G06F12/00

    CPC Class: G06F12/0862; G06F12/127

    Abstract: A method of operating a computer system is disclosed in which an instruction having an explicit prefetch request is issued directly from an instruction sequence unit to a prefetch unit of a processing unit. In a preferred embodiment, two prefetch units are used, the first prefetch unit being hardware independent and dynamically monitoring one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit being aware of the lower level storage subsystem and sending with the prefetch request an indication that a prefetch value is to be loaded into a lower level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit (the latter feature is particularly useful for caches which are shared by a processing unit cluster). If another prefetch value is requested from the memory hierarchy and it is determined that a prefetch limit of cache usage has been met by the cache, then a cache line in the cache containing one of the earlier prefetch values is allocated for receiving the other prefetch value.

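The allocation rule in the abstract's last sentence, capping how much of a congruence class prefetches may occupy, can be sketched as follows. The limit value, way layout, and victim-selection order are assumptions made for the example:

```python
# Toy sketch: once prefetched lines hit a per-congruence-class limit, a
# new prefetch victimizes a line holding an earlier prefetch value rather
# than evicting demand-fetched data.

PREFETCH_LIMIT = 2   # assumed max prefetched ways per congruence class

def allocate_prefetch(congruence_class, addr, stream_id):
    """Pick a way for an incoming prefetch, respecting the prefetch limit."""
    prefetched = [w for w in congruence_class if w.get("prefetch")]
    if len(prefetched) >= PREFETCH_LIMIT:
        victim = prefetched[0]            # reuse a line holding an earlier prefetch
    else:
        victim = next((w for w in congruence_class if not w.get("valid")), None)
        if victim is None:
            victim = congruence_class[0]  # fallback: ordinary replacement
    victim.update(valid=True, prefetch=True, addr=addr, stream=stream_id)
    return victim
```

Tagging each way with `stream` mirrors the abstract's suggestion of associating prefetches with a stream ID, which a fuller policy could use to victimize within the requesting stream first.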

    Time based mechanism for cached speculative data deallocation
    6.
    Granted invention patent (status: Expired)

    Publication No.: US06510494B1

    Publication Date: 2003-01-21

    Application No.: US09345716

    Filing Date: 1999-06-30

    IPC Class: G06F12/08

    Abstract: A method of operating a processing unit of a computer system, by issuing an instruction having an explicit prefetch request directly from an instruction sequence unit to a prefetch unit of the processing unit. The invention applies to values that are either operand data or instructions. In a preferred embodiment, two prefetch units are used, the first prefetch unit being hardware independent and dynamically monitoring one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit being aware of the lower level storage subsystem and sending with the prefetch request an indication that a prefetch value is to be loaded into a lower level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit.

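The time-based deallocation named in this patent's title (and spelled out in the related patent below) amounts to aging out speculative data that is never demand-referenced. A minimal sketch, with the TTL value and record fields invented for illustration:

```python
# Toy sketch: a prefetched line that has not been demand-referenced
# within a fixed window is invalidated, freeing the way for other data.

PREFETCH_TTL = 100   # assumed lifetime, in cycles, of an untouched prefetch

def expire_prefetches(cache, now):
    """Invalidate stale, unreferenced prefetched lines; return their addresses."""
    evicted = []
    for addr, line in list(cache.items()):
        stale = now - line["fetched_at"] > PREFETCH_TTL
        if line["prefetch"] and not line["referenced"] and stale:
            del cache[addr]              # speculative data aged out unused
            evicted.append(addr)
    return evicted
```

A demand hit would set `referenced = True` (and typically clear the prefetch marking), exempting the line from expiry.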

    Method for instruction extensions for a tightly coupled speculative request unit
    9.
    Granted invention patent (status: In force)

    Publication No.: US06421763B1

    Publication Date: 2002-07-16

    Application No.: US09345642

    Filing Date: 1999-06-30

    IPC Class: G06F12/08

    Abstract: A method of operating a processing unit of a computer system, by issuing an instruction having an explicit prefetch request directly from an instruction sequence unit to a prefetch unit of the processing unit. The invention applies to values that are either operand data or instructions. In a preferred embodiment, two prefetch units are used, the first prefetch unit being hardware independent and dynamically monitoring one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit being aware of the lower level storage subsystem and sending with the prefetch request an indication that a prefetch value is to be loaded into a lower level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit (the latter feature is particularly useful for caches which are shared by a processing unit cluster). If another prefetch value is requested from the memory hierarchy, and it is determined that a prefetch limit of cache usage has been met by the cache, then a cache line in the cache containing one of the earlier prefetch values is allocated for receiving the other prefetch value. The prefetch limit of cache usage may be established with a maximum number of sets in a congruence class usable by the requesting processing unit. A flag in a directory of the cache may be set to indicate that the prefetch value was retrieved as the result of a prefetch operation. In the implementation wherein the cache is a multi-level cache, a second flag in the cache directory may be set to indicate that the prefetch value has been sourced to an upstream cache. A cache line containing prefetch data can be automatically invalidated after a preset amount of time has passed since the prefetch value was requested.

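The two directory flags this abstract adds over the related patents can be shown in isolation. The field names `prefetched` and `sourced_upstream` are hypothetical labels for the first and second flags, respectively:

```python
# Toy sketch of the two cache-directory flags: one marking a line as the
# result of a prefetch, one marking that its value has since been sourced
# to an upstream (higher-level) cache.

def install_prefetch(directory, addr):
    """Record a newly prefetched line; first flag set, second flag clear."""
    directory[addr] = {"prefetched": True, "sourced_upstream": False}

def source_to_upstream(directory, addr):
    """An upstream cache pulled the value; set the second flag."""
    entry = directory[addr]
    if entry["prefetched"]:
        entry["sourced_upstream"] = True
    return entry
```

A replacement or time-based invalidation policy could then treat prefetched-but-never-sourced lines as the cheapest victims.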

    High performance data processing system via cache victimization protocols
    10.
    Granted invention patent (status: Expired)

    Publication No.: US06721853B2

    Publication Date: 2004-04-13

    Application No.: US09895232

    Filing Date: 2001-06-29

    IPC Class: G06F12/08

    CPC Class: G06F12/0813

    Abstract: A cache controller for a processor in a remote node of a system bus in a multiway multiprocessor link sends out a cache deallocate address transaction (CDAT) for a given cache line when that cache line is flushed and information from memory in a home node is no longer deemed valid for that cache line of that remote node processor. A local snoop of that CDAT transaction is then performed as a background function by the other processors in the same remote node. If the snoop results indicate that the same information is valid in another cache, and that cache decides it is better to keep it valid in that remote node, then the information remains there. If the snoop results indicate that the information is not valid among caches in that remote node, or will be flushed due to the CDAT, the system memory directory in the home node of the multiprocessor link is notified and changes state in response. The system achieves higher performance because the cache line maintenance functions are performed in the background rather than on the mainstream demand path.

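The CDAT flow above can be condensed into a small sketch: the flushing cache broadcasts the transaction, peers in the remote node snoop it in the background, and the home directory is updated only if no peer elects to keep the line. The cache layout, the `keep` decision bit, and the directory state string are all illustrative assumptions:

```python
# Toy sketch of the CDAT protocol: notify the home-node directory only
# when no cache in the remote node retains a valid copy of the line.

def broadcast_cdat(addr, peer_caches, home_directory):
    """Snoop a cache deallocate address transaction across the remote node.

    Returns True if some peer keeps the line valid locally; otherwise the
    home directory is told the line is no longer cached in this node.
    """
    kept_locally = any(
        peer.get(addr, {}).get("valid") and peer[addr].get("keep")
        for peer in peer_caches
    )
    if not kept_locally:
        home_directory[addr] = "not-cached-in-remote-node"
    return kept_locally
```

Because this runs as a background snoop, the directory bookkeeping stays off the demand path, which is the performance claim the abstract makes.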