Bandwidth of a cache directory by slicing the cache directory into two smaller cache directories and replicating snooping logic for each sliced cache directory
    14.
    Invention application
    Bandwidth of a cache directory by slicing the cache directory into two smaller cache directories and replicating snooping logic for each sliced cache directory (Granted)

    Publication No.: US20060184747A1

    Publication Date: 2006-08-17

    Application No.: US11056721

    Filing Date: 2005-02-11

    IPC Classification: G06F13/28 G06F12/00

    CPC Classification: G06F12/0831 G06F12/0851

    Abstract: A cache, system and method for improving the snoop bandwidth of a cache directory. A cache directory may be sliced into two smaller cache directories, each with its own snooping logic. By having two cache directories that can be accessed simultaneously, the bandwidth can be essentially doubled. Furthermore, a “frequency matcher” may shift the cycle speed to a lower speed upon receiving snoop addresses from the interconnect, thereby slowing down the rate at which requests are transmitted to the dispatch pipelines. Each dispatch pipeline is coupled to a sliced cache directory and is configured to search that cache directory to determine whether data at the received addresses is stored in the cache memory. As a result of slowing down the rate at which requests are transmitted to the dispatch pipelines and accessing the two sliced cache directories simultaneously, the bandwidth or throughput of the cache directory may be improved.

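    The following is a minimal, hypothetical C++ sketch of the slicing idea only (not the patented circuit): a directory is split into two slices selected by one low tag bit, so two snoop addresses that map to different slices can be looked up in the same cycle. The class name, line size, and slice-select hash are illustrative assumptions; the frequency matcher and dispatch pipelines are not modeled.

        // Hypothetical illustration of a two-way sliced snoop directory.
        #include <cstdint>
        #include <unordered_set>

        class SlicedDirectory {
        public:
            // Record that the line containing `addr` is cached.
            void install(uint64_t addr) { slice_for(addr).insert(tag(addr)); }

            // Snoop lookup; two lookups whose addresses select different
            // slices could proceed in parallel in hardware.
            bool snoop(uint64_t addr) const {
                return slice_for(addr).count(tag(addr)) != 0;
            }

        private:
            static constexpr uint64_t kLineBits = 7;               // assume 128-byte lines
            static uint64_t tag(uint64_t addr) { return addr >> kLineBits; }

            // Slice select: lowest tag bit (an assumed, simple hash).
            const std::unordered_set<uint64_t>& slice_for(uint64_t addr) const {
                return slices_[tag(addr) & 1];
            }
            std::unordered_set<uint64_t>& slice_for(uint64_t addr) {
                return slices_[tag(addr) & 1];
            }

            std::unordered_set<uint64_t> slices_[2];               // two smaller directories
        };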

    Victim Cache Using Direct Intervention
    15.
    Invention application
    Victim Cache Using Direct Intervention (Expired)

    Publication No.: US20080046651A1

    Publication Date: 2008-02-21

    Application No.: US11923952

    Filing Date: 2007-10-25

    IPC Classification: G06F12/08

    CPC Classification: G06F12/0897 G06F12/127

    Abstract: A method, system, and device for enabling intervention across same-level cache memories. In a preferred embodiment, responsive to a cache miss in a first cache memory, a direct intervention request is sent from the first cache memory to a second cache memory, requesting a direct intervention that satisfies the cache miss. In an alternate embodiment, direct intervention is utilized to access a same-level victim cache.

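    As a rough illustration of the direct-intervention flow (the Cache class and its peer pointer below are hypothetical, not the patented hardware), a read that misses locally first asks a same-level peer cache before it would fall back to system memory:

        // Hypothetical sketch: a local miss triggers a direct intervention
        // request to a same-level peer cache.
        #include <cstdint>
        #include <optional>
        #include <unordered_map>
        #include <utility>
        #include <vector>

        struct Line { std::vector<uint8_t> data; };

        class Cache {
        public:
            explicit Cache(Cache* peer = nullptr) : peer_(peer) {}

            void fill(uint64_t tag, Line line) { lines_[tag] = std::move(line); }

            std::optional<Line> lookup(uint64_t tag) const {
                auto it = lines_.find(tag);
                if (it == lines_.end()) return std::nullopt;
                return it->second;
            }

            // Read with direct intervention: local lookup, then peer lookup;
            // a real system would fall back to memory on a double miss.
            std::optional<Line> read(uint64_t tag) const {
                if (auto hit = lookup(tag)) return hit;   // local hit
                if (peer_) return peer_->lookup(tag);     // direct intervention
                return std::nullopt;
            }

        private:
            Cache* peer_;
            std::unordered_map<uint64_t, Line> lines_;
        };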

    Data processing system and method for efficient communication utilizing Tn and Ten coherency states
    16.
    Invention application
    Data processing system and method for efficient communication utilizing Tn and Ten coherency states (Granted)

    Publication No.: US20080040556A1

    Publication Date: 2008-02-14

    Application No.: US11835984

    Filing Date: 2007-08-08

    IPC Classification: G06F12/08

    Abstract: A cache coherent data processing system includes at least first and second coherency domains, each including at least one processing unit. The first coherency domain includes a first cache memory and a second cache memory, and the second coherency domain includes a remote coherent cache memory. The first cache memory includes a cache controller, a data array including a data storage location for caching a memory block, and a cache directory. The cache directory includes a tag field for storing an address tag in association with the memory block, and a coherency state field associated with the tag field and the data storage location. The coherency state field has a plurality of possible states, including a state that indicates that the memory block is possibly shared with the second cache memory in the first coherency domain and cached only within the first coherency domain.

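    A simplified sketch of what such a directory entry might look like is shown below. The state names Tn and Ten come from the title and abstract; the remaining enumerators and the struct layout are illustrative assumptions, and the full state machine defined by the patent is not reproduced.

        // Illustrative directory entry with a tag field and a coherency-state
        // field; only the aspect described in the abstract is annotated.
        #include <cstdint>

        enum class CoherencyState : uint8_t {
            Invalid,
            Shared,
            Modified,
            Tn,   // per the abstract: possibly shared with another cache in the
                  // first (local) coherency domain, cached only within that domain
            Ten,  // related domain-local state named in the title (semantics assumed)
        };

        struct DirectoryEntry {
            uint64_t       tag;    // address tag stored in association with the block
            CoherencyState state;  // coherency state for the tag / data storage location
        };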

    Data processing system, method and interconnect fabric for partial response accumulation in a data processing system
    17.
    Invention application
    Data processing system, method and interconnect fabric for partial response accumulation in a data processing system (Expired)

    Publication No.: US20060179272A1

    Publication Date: 2006-08-10

    Application No.: US11055297

    Filing Date: 2005-02-10

    IPC Classification: G06F15/00

    CPC Classification: G06F13/385 G06F9/546

    Abstract: A data processing system includes a plurality of processing units, each having a respective point-to-point communication link with each of multiple others of the plurality of processing units, but with fewer than all of the plurality of processing units. Each of the plurality of processing units includes interconnect logic, coupled to each point-to-point communication link of that processing unit, that broadcasts requests received from one of the multiple others of the plurality of processing units to one or more of the plurality of processing units. The interconnect logic includes a partial response data structure including a plurality of entries, each associating a partial response field with a plurality of flags respectively associated with each processing unit containing a snooper from which that processing unit will receive a partial response. The interconnect logic accumulates partial responses of processing units by reference to the partial response field to obtain an accumulated partial response, and, when the plurality of flags indicate that all processing units from which partial responses are expected have returned a partial response, outputs the accumulated partial response.

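    A minimal sketch of the accumulation bookkeeping described above follows (the type and field names are hypothetical): each in-flight request keeps a flag per expected responder and ORs partial-response bits together, releasing the accumulated value only once every expected snooper has reported.

        // Hypothetical per-request entry in a partial response data structure.
        #include <cstdint>
        #include <optional>

        struct PartialResponseEntry {
            uint32_t expected    = 0;  // one flag per processing unit expected to respond
            uint32_t received    = 0;  // units whose partial response has arrived
            uint32_t accumulated = 0;  // accumulated partial-response bits

            // Fold in a partial response from `unit`; return the accumulated
            // response once every expected unit has answered.
            std::optional<uint32_t> accumulate(unsigned unit, uint32_t presp) {
                received    |= (1u << unit);
                accumulated |= presp;
                if ((received & expected) == expected) return accumulated;
                return std::nullopt;
            }
        };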

    Data processing system, method and interconnect fabric supporting destination data tagging
    18.
    Invention application
    Data processing system, method and interconnect fabric supporting destination data tagging (Granted)

    Publication No.: US20060179254A1

    Publication Date: 2006-08-10

    Application No.: US11055405

    Filing Date: 2005-02-10

    IPC Classification: G06F13/28

    CPC Classification: G06F15/16

    Abstract: A data processing system includes a plurality of communication links and a plurality of processing units including a local master processing unit. The local master processing unit includes interconnect logic that couples the processing unit to one or more of the plurality of communication links, and an originating master coupled to the interconnect logic. The originating master originates an operation by issuing a write-type request on at least one of the one or more communication links, receives from a snooper in the data processing system a destination tag identifying a route to the snooper, and, responsive to receipt of a combined response and the destination tag, initiates a data transfer including a data payload and a data tag identifying the route provided within the destination tag.

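    The routing idea can be sketched as follows (the types and function below are hypothetical, not the patented fabric): the destination tag returned by the snooper names a route, and the originating master copies that route into the data tag that travels with the payload of the data phase.

        // Hypothetical sketch of destination data tagging.
        #include <cstdint>
        #include <utility>
        #include <vector>

        struct DestinationTag { uint32_t route; };   // identifies a route to the snooper

        struct DataTransfer {
            uint32_t             data_tag;           // route copied from the destination tag
            std::vector<uint8_t> payload;            // data payload of the write
        };

        // Called once the originating master has the destination tag (and, in
        // the abstract's flow, a combined response permitting the transfer).
        DataTransfer start_data_phase(const DestinationTag& dest,
                                      std::vector<uint8_t> payload) {
            return DataTransfer{dest.route, std::move(payload)};
        }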

    Data processing system, method and interconnect fabric that protect ownership transfer with a protection window extension
    19.
    Invention application
    Data processing system, method and interconnect fabric that protect ownership transfer with a protection window extension (Pending, published)

    Publication No.: US20060179253A1

    Publication Date: 2006-08-10

    Application No.: US11054841

    Filing Date: 2005-02-10

    IPC Classification: G06F13/28

    Abstract: A data processing system includes a memory system, a plurality of masters that issue requests for access to memory blocks within the memory system, a plurality of snoopers that provide partial responses to requests by the masters, and response logic that generates combined responses for the requests in response to the partial responses provided by the plurality of snoopers. The plurality of masters includes a winning master that issues a request for a particular memory block, and the plurality of snoopers includes a protecting snooper that, in response to receipt of the request, provides a partial response and protects the transfer of coherency ownership of the particular memory block to the winning master until expiration of a protection window extension following receipt, from the response logic, of a combined response for the request.

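    A toy model of the protection window extension is sketched below (the struct, its fields, and the 8-cycle extension are assumptions for illustration): the protecting snooper begins protecting the block when it snoops the request and keeps forcing competing requests to retry until a fixed number of cycles after the combined response arrives.

        // Hypothetical model of a snooper protecting an ownership transfer.
        #include <cstdint>

        struct ProtectingSnooper {
            bool     protecting    = false;
            uint64_t protect_until = UINT64_MAX;   // open-ended until the combined response
            uint64_t extension     = 8;            // assumed extension length, in cycles

            void on_request_snooped()               { protecting = true; }
            void on_combined_response(uint64_t now) { protect_until = now + extension; }

            // Competing requests for the protected block are retried while the
            // protection window (plus its extension) is still open.
            bool must_retry(uint64_t now) const { return protecting && now < protect_until; }
        };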

    Half-good mode for large L2 cache array topology with different latency domains
    20.
    Invention application
    Half-good mode for large L2 cache array topology with different latency domains (Granted)

    Publication No.: US20060179230A1

    Publication Date: 2006-08-10

    Application No.: US11055262

    Filing Date: 2005-02-10

    IPC Classification: G06F12/00

    CPC Classification: G06F12/0851 G06F12/126

    Abstract: A cache memory logically partitions a cache array into at least two slices, each having a plurality of cache lines, with a given cache line spread across two or more cache ways of contiguous bytes and a given cache way shared between the two cache slices. If a cache way that is part of a first cache line in the first cache slice and part of a second cache line in the second cache slice is defective, it is disabled while the cache continues to use at least one other cache way that is also part of the first cache line and part of the second cache line. In the illustrative embodiment the cache array is set associative, and at least two different cache ways for a given cache line contain different congruence classes for that cache line. The defective cache way can be disabled by preventing an eviction mechanism from allocating any congruence class in the defective way. For example, half of the cache line can be disabled (i.e., half of the congruence classes). The cache array may be arranged with rows and columns of cache sectors (rows corresponding to the cache ways), wherein a given cache line is further spread across sectors in different rows and columns, with at least one portion of the given cache line being located in a first column having a first latency and another portion of the given cache line being located in a second column having a second latency greater than the first latency. The cache array can also output different sectors of the given cache line in successive clock cycles based on the latency of a given sector.

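    The way-disabling mechanism can be illustrated with a small sketch (the structure, the 8-way set, and the round-robin policy are assumptions): the replacement logic simply never allocates into a way whose mask bit marks it defective, so both cache slices keep operating on the remaining ways.

        // Hypothetical victim selection that skips ways marked defective.
        #include <cstdint>
        #include <optional>

        struct CacheSet {
            static constexpr unsigned kWays = 8;  // assumed associativity
            uint8_t  defective_mask = 0;          // bit i set => way i is disabled
            unsigned next_victim    = 0;          // simple round-robin pointer (assumed policy)

            // Choose a victim way for allocation, never picking a defective way.
            std::optional<unsigned> pick_victim() {
                for (unsigned tries = 0; tries < kWays; ++tries) {
                    unsigned way = next_victim;
                    next_victim  = (next_victim + 1) % kWays;
                    if ((defective_mask >> way) & 1) continue;  // skip disabled ways
                    return way;
                }
                return std::nullopt;  // every way disabled; allocation impossible
            }
        };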