71. Extended cache coherency protocol with a modified store instruction lock release indicator
    Granted Patent (Expired)

    Publication No.: US06625701B1

    Publication Date: 2003-09-23

    Application No.: US09437183

    Filing Date: 1999-11-09

    IPC Classification: G06F12/14

    CPC Classification: G06F12/0811 G06F12/0815

    Abstract: A multiprocessor data processing system requires careful management to maintain cache coherency. Conventional systems using a MESI approach sacrifice some performance with inefficient lock-acquisition and lock-retention techniques. The disclosed system provides additional cache states, indicator bits, and lock-acquisition routines to improve cache performance. In particular, as multiple processors compete for the same cache line, a significant amount of processor time is lost determining if another processor's cache line lock has been released and attempting to reserve that cache line while it is still owned by the other processor. The preferred embodiment provides an indicator bit with the cache store command which specifically indicates whether the store also acts as a lock-release.
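    The key mechanism here is the extra indicator bit carried with the store command. Below is a minimal C sketch of the idea, assuming an illustrative store_cmd structure, a lock_release flag, and a snoop_store helper; these names and the snooping behaviour are assumptions for illustration, not the patent's actual bus encoding.

    /* Illustrative sketch only: a store command carrying a lock-release
     * indicator bit, so a waiting processor reacts to the release instead
     * of repeatedly polling the line. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        unsigned long addr;    /* target cache line address              */
        unsigned long data;    /* value being stored                      */
        bool lock_release;     /* indicator bit: this store frees a lock  */
    } store_cmd;

    /* A competing processor snooping the bus attempts acquisition only when
     * it observes a store explicitly flagged as a lock release. */
    static void snoop_store(const store_cmd *cmd, unsigned long waited_line)
    {
        if (cmd->addr == waited_line && cmd->lock_release)
            printf("lock on line %#lx released; attempt acquisition now\n",
                   cmd->addr);
    }

    int main(void)
    {
        store_cmd release = { .addr = 0x1000, .data = 0, .lock_release = true };
        snoop_store(&release, 0x1000);
        return 0;
    }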


72. Cache coherency protocol employing a read operation including a programmable flag to indicate deallocation of an intervened cache line
    Granted Patent (In Force)

    Publication No.: US06345342B1

    Publication Date: 2002-02-05

    Application No.: US09437177

    Filing Date: 1999-11-09

    IPC Classification: G06F12/00

    CPC Classification: G06F12/0831

    Abstract: A novel cache coherency protocol provides a modified-unsolicited (Mu) cache state to indicate that a value held in a cache line has been modified (i.e., is not currently consistent with system memory), but was modified by another processing unit, not by the processing unit associated with the cache that currently contains the value in the Mu state, and that the value is held exclusive of any other horizontally adjacent caches. Because the value is exclusively held, it may be modified in that cache without the necessity of issuing a bus transaction to other horizontal caches in the memory hierarchy. The Mu state may be applied as a result of a snoop response to a read request. The read request can include a flag to indicate that the requesting cache is capable of utilizing the Mu state. Alternatively, a flag may be provided with intervention data to indicate that the requesting cache should utilize the modified-unsolicited state.
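    As a rough illustration of the modified-unsolicited state, the C sketch below models a MESI state set extended with Mu and the install decision made by a requester whose read carried the capability flag. The state names, the fallback to Shared, and the install_state helper are assumptions for illustration, not the patented protocol itself.

    #include <stdbool.h>
    #include <stdio.h>

    /* MESI states plus the modified-unsolicited state described in the abstract */
    typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED, MODIFIED_UNSOLICITED } line_state;

    /* State the requester installs after intervention data arrives.  The
     * non-Mu-capable fallback shown here (Shared) is one common choice,
     * not necessarily the patent's. */
    static line_state install_state(bool requester_mu_capable, bool data_was_modified)
    {
        if (data_was_modified && requester_mu_capable)
            return MODIFIED_UNSOLICITED;  /* dirty, exclusively held: later writes need no bus op */
        return SHARED;
    }

    int main(void)
    {
        printf("install state = %d\n", install_state(true, true));  /* 4 = Mu */
        return 0;
    }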


73. High performance multiprocessor system with modified-unsolicited cache state
    Granted Patent (Expired)

    Publication No.: US06321306B1

    Publication Date: 2001-11-20

    Application No.: US09437179

    Filing Date: 1999-11-09

    IPC Classification: G06F12/12

    CPC Classification: G06F12/0831

    Abstract: A novel cache coherency protocol provides a modified-unsolicited (MU) cache state to indicate that a value held in a cache line has been modified (i.e., is not currently consistent with system memory), but was modified by another processing unit, not by the processing unit associated with the cache that currently contains the value in the MU state, and that the value is held exclusive of any other horizontally adjacent caches. Because the value is exclusively held, it may be modified in that cache without the necessity of issuing a bus transaction to other horizontal caches in the memory hierarchy. The MU state may be applied as a result of a snoop response to a read request. The read request can include a flag to indicate that the requesting cache is capable of utilizing the MU state. Alternatively, a flag may be provided with intervention data to indicate that the requesting cache should utilize the modified-unsolicited state.


75. Chaining multiple smaller store queue entries for more efficient store queue usage
    Granted Patent (In Force)

    Publication No.: US08166246B2

    Publication Date: 2012-04-24

    Application No.: US12023600

    Filing Date: 2008-01-31

    IPC Classification: G06F12/00 G06F13/00 G06F13/28

    CPC Classification: G06F12/0893 G06F12/0815

    Abstract: A computer implemented method, a processor chip, a data processing system, and a computer program product process information in a store cache of a data processing system. The store cache receives a first entry that includes a first address indicating a first segment of a cache line. The store cache then receives a second entry including a second address indicating a second segment of the cache line. Responsive to the first segment not being equal to the second segment, the first entry is chained to the second entry.
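    A minimal C sketch of the chaining condition described above follows, assuming illustrative sq_entry fields and a try_chain helper; the real store queue hardware tracks considerably more state.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef struct sq_entry {
        unsigned long line_addr;    /* cache line this store targets       */
        unsigned int  segment;      /* which segment of the line it writes  */
        struct sq_entry *chained;   /* next entry gathered onto this line   */
    } sq_entry;

    /* Chain the new entry onto an existing one when both address the same
     * cache line but different segments, so they can later drain together. */
    static bool try_chain(sq_entry *existing, sq_entry *incoming)
    {
        if (existing->line_addr == incoming->line_addr &&
            existing->segment   != incoming->segment) {
            existing->chained = incoming;
            return true;
        }
        return false;
    }

    int main(void)
    {
        sq_entry first  = { .line_addr = 0x80, .segment = 0, .chained = NULL };
        sq_entry second = { .line_addr = 0x80, .segment = 1, .chained = NULL };
        printf("chained: %s\n", try_chain(&first, &second) ? "yes" : "no");
        return 0;
    }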


76. Information handling system with immediate scheduling of load operations and fine-grained access to cache memory
    Granted Patent (In Force)

    Publication No.: US08140756B2

    Publication Date: 2012-03-20

    Application No.: US12424332

    Filing Date: 2009-04-15

    IPC Classification: G06F13/00

    CPC Classification: G06F12/0822

    Abstract: An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. When the L2 cache memory finishes servicing the interrupting load request, the L2 cache memory may return to servicing the interrupted store request at the point of interruption. The control logic determines the size requirement of each load operation or store operation. When the cache memory system performs a store operation or load operation, the memory system accesses the portion of a cache line it needs to perform the operation instead of accessing an entire cache line.
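    The sketch below is a toy C model of the interrupt-and-resume behaviour described above, assuming a cache line transferred in four beats; the beat count and control flow are illustrative only.

    #include <stdio.h>

    #define LINE_BEATS 4   /* assume a cache line transferred in 4 beats */

    int main(void)
    {
        int store_beat = 0;
        int load_pending_at = 2;   /* a load arrives after two store beats */

        while (store_beat < LINE_BEATS) {
            if (store_beat == load_pending_at) {
                printf("beat %d: pause store, service load first\n", store_beat);
                load_pending_at = -1;   /* load done */
                continue;               /* resume the store where it stopped */
            }
            printf("beat %d: store data beat\n", store_beat);
            store_beat++;
        }
        return 0;
    }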


77. INFORMATION HANDLING SYSTEM WITH IMMEDIATE SCHEDULING OF LOAD OPERATIONS IN A DUAL-BANK CACHE WITH DUAL DISPATCH INTO WRITE/READ DATA FLOW
    Patent Application (In Force)

    Publication No.: US20100268887A1

    Publication Date: 2010-10-21

    Application No.: US12424255

    Filing Date: 2009-04-15

    IPC Classification: G06F12/08

    CPC Classification: G06F12/0897 G06F12/0811

    Abstract: An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. The L2 cache memory includes dual data banks so that one bank may perform a load operation while the other bank performs a store operation. The cache system provides dual dispatch points into the data flow to the dual cache banks of the L2 cache memory.
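    As a rough C sketch of the dual-bank idea, the code below assumes a bank selected by the address bit just above a 128-byte line offset and checks whether a load and a store can dispatch in the same cycle; the bank-select choice is an assumption, not the patent's mapping.

    #include <stdbool.h>
    #include <stdio.h>

    /* Assumed mapping: bit 7 (above a 128-byte line offset) picks the bank. */
    static int bank_of(unsigned long addr) { return (int)((addr >> 7) & 1); }

    /* A load and a store can be dispatched together only if they fall in
     * different banks; otherwise they must serialize. */
    static bool same_cycle_ok(unsigned long load_addr, unsigned long store_addr)
    {
        return bank_of(load_addr) != bank_of(store_addr);
    }

    int main(void)
    {
        printf("load 0x000 with store 0x080: %s\n",
               same_cycle_ok(0x000, 0x080) ? "parallel" : "serialize");
        printf("load 0x000 with store 0x100: %s\n",
               same_cycle_ok(0x000, 0x100) ? "parallel" : "serialize");
        return 0;
    }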


78. LOAD REQUEST SCHEDULING IN A CACHE HIERARCHY
    Patent Application (In Force)

    Publication No.: US20100268882A1

    Publication Date: 2010-10-21

    Application No.: US12424207

    Filing Date: 2009-04-15

    IPC Classification: G06F12/08 G06F12/12

    Abstract: A system and method for tracking core load requests and providing arbitration and ordering of requests. When a core interface unit (CIU) receives a load operation from the processor core, a new entry is allocated in a queue of the CIU. In response to allocating the new entry in the queue, the CIU detects contention between the load request and another memory access request. In response to detecting contention, the load request may be suspended until the contention is resolved. Received load requests may be stored in the queue and tracked using a least recently used (LRU) mechanism. The load request may then be processed when the load request resides in the least recently used entry of the load request queue. The CIU may also suspend issuing an instruction unless a read claim (RC) machine is available. In another embodiment, the CIU may issue stored load requests in a specific priority order.
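    A minimal C sketch of the issue condition implied by the abstract follows, assuming a load issues only when it is the oldest queued entry, is not contended, and an RC machine is free; the load_req structure and can_issue helper are illustrative assumptions.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int  age;          /* larger = waiting longer in the CIU queue    */
        bool contended;    /* conflicts with another memory access request */
    } load_req;

    /* Issue only the oldest (LRU) uncontended load, and only when a
     * read-claim (RC) machine is available to take it. */
    static bool can_issue(const load_req *r, int oldest_age, bool rc_machine_free)
    {
        return r->age == oldest_age && !r->contended && rc_machine_free;
    }

    int main(void)
    {
        load_req r = { .age = 5, .contended = false };
        printf("issue: %s\n", can_issue(&r, 5, true) ? "yes" : "hold");
        return 0;
    }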


79. L2 cache array topology for large cache with different latency domains
    Granted Patent (In Force)

    Publication No.: US07783834B2

    Publication Date: 2010-08-24

    Application No.: US11947742

    Filing Date: 2007-11-29

    IPC Classification: G06F12/00

    Abstract: A cache memory logically associates a cache line with at least two cache sectors of a cache array wherein different sectors have different output latencies and, for a load hit, selectively enables the cache sectors based on their latency to output the cache line over successive clock cycles. Larger wires having a higher transmission speed are preferably used to output the cache line corresponding to the requested memory block. In the illustrative embodiment the cache is arranged with rows and columns of the cache sectors, and a given cache line is spread across sectors in different columns, with at least one portion of the given cache line being located in a first column having a first latency, and another portion of the given cache line being located in a second column having a second latency greater than the first latency. One set of wires oriented along a horizontal direction may be used to output the cache line, while another set of wires oriented along a vertical direction may be used for maintenance of the cache sectors. A given cache line is further preferably spread across sectors in different rows or cache ways. For example, a cache line can be 128 bytes and spread across four sectors in four different columns, each sector containing 32 bytes of the cache line, and the cache line is output over four successive clock cycles with one sector being transmitted during each of the four cycles.
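    The 128-byte example in the abstract works out to four 32-byte sectors returned one per cycle. The C sketch below walks that schedule with assumed per-column latencies of 1 through 4 cycles; the latency values and byte mapping are illustrative only.

    #include <stdio.h>

    #define SECTORS       4
    #define SECTOR_BYTES 32   /* 4 x 32 B = 128 B cache line */

    int main(void)
    {
        /* assumed output latency (in cycles) of each column holding a sector */
        int latency[SECTORS] = { 1, 2, 3, 4 };

        for (int cycle = 1; cycle <= SECTORS; cycle++)
            for (int s = 0; s < SECTORS; s++)
                if (latency[s] == cycle)
                    printf("cycle %d: sector %d drives bytes %d-%d\n",
                           cycle, s, s * SECTOR_BYTES, (s + 1) * SECTOR_BYTES - 1);
        return 0;
    }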


80. Data processing system and method for efficient L3 cache directory management
    Granted Patent (In Force)

    Publication No.: US07500065B2

    Publication Date: 2009-03-03

    Application No.: US11956102

    Filing Date: 2007-12-13

    IPC Classification: G06F12/00

    Abstract: A system and method for cache management in a data processing system having a memory hierarchy of upper memory and lower memory cache. A lower memory cache controller accesses a coherency state table to determine replacement policies of coherency states for cache lines present in the lower memory cache when receiving a cast-in request from one of the upper memory caches. The coherency state table implements a replacement policy that retains the more valuable cache coherency state information between the upper and lower memory caches for a particular cache line contained in both levels of memory at the time of cast-out from the upper memory cache.
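    A minimal C sketch of a cast-in merge policy follows, assuming a simple I < S < E < M value ordering; the patent's actual coherency state table and state set are richer than this illustration.

    #include <stdio.h>

    /* ordered from least to most valuable for this sketch */
    typedef enum { I = 0, S, E, M } coh_state;

    /* Keep the more valuable of the L2 cast-out state and the state the L3
     * directory already holds for the same line. */
    static coh_state castin_merge(coh_state l3_current, coh_state l2_castout)
    {
        return l2_castout > l3_current ? l2_castout : l3_current;
    }

    int main(void)
    {
        printf("L3=S, cast-in=M -> %d (M kept)\n", castin_merge(S, M));
        printf("L3=E, cast-in=S -> %d (E kept)\n", castin_merge(E, S));
        return 0;
    }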
