Victim Cache Using Direct Intervention
    51.
    Invention Application
    Victim Cache Using Direct Intervention (Expired)

    Publication No.: US20080046651A1

    Publication Date: 2008-02-21

    Application No.: US11923952

    Filing Date: 2007-10-25

    IPC Class: G06F12/08

    CPC Class: G06F12/0897 G06F12/127

    Abstract: A method, system, and device for enabling intervention across same-level cache memories. In a preferred embodiment, responsive to a cache miss in a first cache memory, a direct intervention request is sent from the first cache memory to a second cache memory requesting a direct intervention that satisfies the cache miss. In an alternate embodiment, direct intervention is utilized to access a same-level victim cache.

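The mechanism in the abstract can be restated in software terms: on a miss in one cache, the controller first asks a peer cache at the same level (for example, a same-level victim cache) to intervene directly, before falling through to the next level of the hierarchy. A minimal Python sketch, with class and method names that are illustrative rather than taken from the patent:

```python
class Cache:
    """Toy same-level cache that can satisfy a peer's miss by direct intervention."""

    def __init__(self, name):
        self.name = name
        self.lines = {}      # address -> data
        self.peer = None     # same-level peer (e.g., a victim cache)

    def read(self, addr):
        if addr in self.lines:                # hit in this cache
            return self.lines[addr]
        if self.peer is not None:             # miss: request direct intervention
            data = self.peer.intervene(addr)
            if data is not None:
                self.lines[addr] = data       # install the intervened line
                return data
        return None                           # would fall through to the next level

    def intervene(self, addr):
        """Serve a peer's direct intervention request, if this cache holds the line."""
        return self.lines.get(addr)


l2 = Cache("L2")
victim = Cache("victim-L2")
l2.peer = victim
victim.lines[0x100] = b"cached-data"
assert l2.read(0x100) == b"cached-data"       # miss satisfied by peer intervention
```

The point of the sketch is the control flow: the requesting cache never consults the next cache level when a same-level peer can supply the line.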

    System and method of responding to a cache read error with a temporary cache directory column delete
    52.
    Invention Application
    System and method of responding to a cache read error with a temporary cache directory column delete (Pending, Published)

    Publication No.: US20070022250A1

    Publication Date: 2007-01-25

    Application No.: US11184343

    Filing Date: 2005-07-19

    IPC Class: G06F12/00

    Abstract: A system and method of responding to a cache read error with a temporary cache directory column delete. A read command is received at a cache controller. In response to determining that data requested by said read command is stored in a specific data location in the cache, a read of the data is initiated. In response to determining that the read of said data results in an error, a column delete indicator is set for an associativity class including the specific data location, temporarily preventing allocation within that associativity class of storage locations. A specific line delete command that marks the specific data location as deleted is issued. In response to the issuing of the specific line delete command, the column delete indicator for the associativity class is reset, such that storage locations within the associativity class other than the specific data location can again be allocated to hold new data.

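The recovery sequence above can be modeled as three flag updates on a set-associative structure: block allocation in the whole congruence class, mark only the failing location deleted, then unblock the class. A toy Python sketch, assuming a small set-associative cache with per-set and per-line flags (all names illustrative):

```python
class SetAssocCache:
    """Toy 2-way cache modeling the temporary column-delete recovery flow."""

    def __init__(self, num_sets=4, ways=2):
        self.deleted_line = [[False] * ways for _ in range(num_sets)]
        self.column_delete = [False] * num_sets   # blocks allocation in a whole set

    def on_read_error(self, set_idx, way):
        # 1. Temporarily prevent allocation anywhere in the congruence class.
        self.column_delete[set_idx] = True
        # 2. Permanently mark the specific failing location as deleted.
        self.deleted_line[set_idx][way] = True
        # 3. Reset the column delete so the other ways can be allocated again.
        self.column_delete[set_idx] = False

    def can_allocate(self, set_idx, way):
        return (not self.column_delete[set_idx]
                and not self.deleted_line[set_idx][way])


c = SetAssocCache()
c.on_read_error(2, 1)
assert not c.can_allocate(2, 1)   # the bad location stays deleted
assert c.can_allocate(2, 0)       # the rest of the class is usable again
```

In hardware the three steps are separated in time (the column delete covers the window until the line delete command completes); the sketch collapses that window into straight-line code.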

    Data processing system, method and interconnect fabric for synchronized communication in a data processing system
    53.
    Invention Application
    Data processing system, method and interconnect fabric for synchronized communication in a data processing system (Expired)

    Publication No.: US20060179337A1

    Publication Date: 2006-08-10

    Application No.: US11055299

    Filing Date: 2005-02-10

    IPC Class: G06F1/04 G06F1/12 G06F15/16

    CPC Class: G06F15/16

    Abstract: A data processing system includes a plurality of processing units, including at least a local master and a local hub, which are coupled for communication via a communication link. The local master includes a master capable of initiating an operation, a snooper capable of receiving an operation, and interconnect logic coupled to a communication link coupling the local master to the local hub. The interconnect logic includes request logic that synchronizes internal transmission of a request of the master to the snooper with transmission, via the communication link, of the request to the local hub.

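The request logic's job, restated: the local master's own snooper should observe the request at a time consistent with when the local hub observes it on the link, so all snoopers see the operation in step. A toy cycle-based sketch (the latency value and function names are assumptions, not from the patent):

```python
LINK_LATENCY = 2  # assumed cycles for a request to cross the link to the local hub

def schedule_request(issue_cycle):
    """Return (cycle the local snooper sees the request,
               cycle the local hub sees it on the link).

    Internal delivery is deliberately delayed to match the link
    transmission, keeping both observers synchronized.
    """
    link_arrival = issue_cycle + LINK_LATENCY
    internal_delivery = link_arrival      # synchronize with the link transmission
    return internal_delivery, link_arrival


local, remote = schedule_request(10)
assert local == remote == 12              # both observers see the request together
```

The design point is that without the delay, the local snooper would see the request earlier than remote snoopers, breaking ordering assumptions in the coherence protocol.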

    Data processing system, method and interconnect fabric for partial response accumulation in a data processing system
    54.
    Invention Application
    Data processing system, method and interconnect fabric for partial response accumulation in a data processing system (Expired)

    Publication No.: US20060179272A1

    Publication Date: 2006-08-10

    Application No.: US11055297

    Filing Date: 2005-02-10

    IPC Class: G06F15/00

    CPC Class: G06F13/385 G06F9/546

    Abstract: A data processing system includes a plurality of processing units each having a respective point-to-point communication link with each of multiple others of the plurality of processing units but fewer than all of the plurality of processing units. Each of the plurality of processing units includes interconnect logic, coupled to each point-to-point communication link of that processing unit, that broadcasts requests received from one of the multiple others of the plurality of processing units to one or more of the plurality of processing units. The interconnect logic includes a partial response data structure including a plurality of entries each associating a partial response field with a plurality of flags respectively associated with each processing unit containing a snooper from which that processing unit will receive a partial response. The interconnect logic accumulates partial responses of processing units by reference to the partial response field to obtain an accumulated partial response, and when the plurality of flags indicate that all processing units from which partial responses are expected have returned a partial response, outputs the accumulated partial response.

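The accumulation described above amounts to: keep one running response value plus one flag per expected responder, fold each arriving partial response into the running value, and release the result only when every flag is set. A small Python sketch (combining responses by bitwise OR is an assumption; the patent only says they are accumulated into the partial response field):

```python
def accumulate_partial_responses(expected_units, responses):
    """Fold partial responses together; return the accumulated response
    only once every expected processing unit has reported, else None."""
    flags = {unit: False for unit in expected_units}   # one flag per expected unit
    accumulated = 0
    for unit, presp in responses:
        if unit in flags:
            flags[unit] = True
            accumulated |= presp                       # fold into the running value
        if all(flags.values()):
            return accumulated                         # all expected responses in
    return None                                        # still waiting


result = accumulate_partial_responses(
    ["PU0", "PU1"], [("PU0", 0b01), ("PU1", 0b10)])
assert result == 0b11
```

With only one of the two expected units reporting, the function returns None, modeling the interconnect logic withholding its output until the flags are complete.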

    Data processing system, method and interconnect fabric supporting destination data tagging
    55.
    Invention Application
    Data processing system, method and interconnect fabric supporting destination data tagging (Granted)

    Publication No.: US20060179254A1

    Publication Date: 2006-08-10

    Application No.: US11055405

    Filing Date: 2005-02-10

    IPC Class: G06F13/28

    CPC Class: G06F15/16

    Abstract: A data processing system includes a plurality of communication links and a plurality of processing units including a local master processing unit. The local master processing unit includes interconnect logic that couples the processing unit to one or more of the plurality of communication links and an originating master coupled to the interconnect logic. The originating master originates an operation by issuing a write-type request on at least one of the one or more communication links, receives from a snooper in the data processing system a destination tag identifying a route to the snooper, and, responsive to receipt of the combined response and the destination tag, initiates a data transfer including a data payload and a data tag identifying the route provided within the destination tag.

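The tagged data flow can be sketched as three steps: the master broadcasts a write-type request, a snooper answers with a destination tag naming the route back to itself, and the master attaches that tag to the data payload so the fabric can steer it. A hedged Python sketch (the dictionary fields, route string format, and "first snooper claims it" rule are all illustrative, not the patent's protocol):

```python
def run_write_operation(master_id, snoopers):
    """Toy destination-data-tagging flow for one write-type operation."""
    # 1. Master issues a write-type request; one snooper claims it and
    #    replies with a destination tag identifying the route back to it.
    claiming = snoopers[0]
    destination_tag = claiming["route"]
    # 2. After the combined response, the master initiates the data
    #    transfer, tagging the payload with the route from the tag.
    packet = {
        "src": master_id,
        "data_tag": destination_tag,   # fabric routes the payload by this tag
        "payload": b"\x00" * 64,       # one cache line of data
    }
    return packet


pkt = run_write_operation("LM0", [{"route": "hub1/port3"}])
assert pkt["data_tag"] == "hub1/port3"
```

The design point is that the data phase needs no address lookup: the route travels inside the tag the snooper supplied during the request phase.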

    Half-good mode for large L2 cache array topology with different latency domains
    56.
    Invention Application
    Half-good mode for large L2 cache array topology with different latency domains (Granted)

    Publication No.: US20060179230A1

    Publication Date: 2006-08-10

    Application No.: US11055262

    Filing Date: 2005-02-10

    IPC Class: G06F12/00

    CPC Class: G06F12/0851 G06F12/126

    Abstract: A cache memory logically partitions a cache array into at least two slices each having a plurality of cache lines, with a given cache line spread across two or more cache ways of contiguous bytes and a given cache way shared between the two cache slices. If a cache way that is part of a first cache line in the first cache slice and part of a second cache line in the second cache slice is defective, it is disabled while continuing to use at least one other cache way which is also part of the first cache line and part of the second cache line. In the illustrative embodiment the cache array is set associative and at least two different cache ways for a given cache line contain different congruence classes for that cache line. The defective cache way can be disabled by preventing an eviction mechanism from allocating any congruence class in the defective way. For example, half of the cache line can be disabled (i.e., half of the congruence classes). The cache array may be arranged with rows and columns of cache sectors (rows corresponding to the cache ways) wherein a given cache line is further spread across sectors in different rows and columns, with at least one portion of the given cache line being located in a first column having a first latency and another portion of the given cache line being located in a second column having a second latency greater than the first latency. The cache array can also output different sectors of the given cache line in successive clock cycles based on the latency of a given sector.

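The core of half-good mode is the allocation rule: the eviction mechanism simply never picks a victim in a way that has been marked defective, so the remaining ways of each affected line stay in service. A minimal Python sketch of that victim-selection rule (class and method names are illustrative, and the latency-domain layout is omitted):

```python
class HalfGoodCache:
    """Toy victim selector: eviction never allocates into a defective way."""

    def __init__(self):
        self.defective_ways = set()

    def mark_defective(self, way):
        """Disable a way found defective (e.g., by array test or read errors)."""
        self.defective_ways.add(way)

    def pick_victim(self, candidate_ways):
        """Choose an eviction victim, skipping any defective way."""
        usable = [w for w in candidate_ways if w not in self.defective_ways]
        return usable[0] if usable else None


c = HalfGoodCache()
c.mark_defective(3)
assert c.pick_victim([3, 5]) == 5   # defective way 3 is never allocated
```

Because allocation is the only path by which new data enters a way, filtering the victim list is sufficient to retire the defective way without any change to the hit path.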

    L2 cache controller with slice directory and unified cache structure
    57.
    Invention Application

    Publication No.: US20060179229A1

    Publication Date: 2006-08-10

    Application No.: US11054924

    Filing Date: 2005-02-10

    IPC Class: G06F12/00

    CPC Class: G06F12/0851 G06F12/0811

    Abstract: A cache memory logically partitions a cache array having a single access/command port into at least two slices, and uses a first cache directory to access the first cache array slice while using a second cache directory to access the second cache array slice, but accesses from the cache directories are managed using a single cache arbiter which controls the single access/command port. In the illustrative embodiment, each cache directory has its own directory arbiter to handle conflicting internal requests, and the directory arbiters communicate with the cache arbiter. An address tag associated with a load request is transmitted from the processor core with a designated bit that associates the address tag with only one of the cache array slices whose corresponding directory determines whether the address tag matches a currently valid cache entry. The cache array may be arranged with rows and columns of cache sectors wherein a given cache line is spread across sectors in different rows and columns, with at least one portion of the given cache line being located in a first column having a first latency and another portion of the given cache line being located in a second column having a second latency greater than the first latency. The cache array outputs different sectors of the given cache line in successive clock cycles based on the latency of a given sector.
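The slice-steering step in the abstract is just a one-bit dispatch: a designated bit of the address tag selects which slice directory performs the hit check, while the array behind both directories is reached through one arbiter and one port. A toy Python sketch of the dispatch (the bit position and directory representation are assumed conventions, not from the patent):

```python
NUM_SLICES = 2
SLICE_SELECT_SHIFT = 6   # assumed position of the designated slice-select bit

def directory_lookup(address_tag, slice_directories):
    """Route a load's address tag to exactly one slice directory and
    report (selected slice, hit?)."""
    slice_id = (address_tag >> SLICE_SELECT_SHIFT) & 0x1   # designated bit
    directory = slice_directories[slice_id]                # only this slice checks
    return slice_id, address_tag in directory


# Slice 0's directory is empty; slice 1 holds one valid tag.
dirs = [set(), {0x1040}]
sid, hit = directory_lookup(0x1040, dirs)
assert sid == 1 and hit
```

Because each tag is steered to exactly one directory, the two directories never race on the same request; only their winning accesses compete at the single cache arbiter.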

    Data processing system, method and interconnect fabric supporting multiple planes of processing nodes
    58.
    Invention Application
    Data processing system, method and interconnect fabric supporting multiple planes of processing nodes (Granted)

    Publication No.: US20070081516A1

    Publication Date: 2007-04-12

    Application No.: US11245887

    Filing Date: 2005-10-07

    IPC Class: H04L12/28

    CPC Class: G06F15/16

    Abstract: A data processing system includes a first plane including a first plurality of processing nodes, each including multiple processing units, and a second plane including a second plurality of processing nodes, each including multiple processing units. The data processing system also includes a plurality of point-to-point first tier links. Each of the first plurality and second plurality of processing nodes includes one or more first tier links among the plurality of first tier links, where the first tier link(s) within each processing node connect a pair of processing units in the same processing node for communication. The data processing system further includes a plurality of point-to-point second tier links. At least a first of the plurality of second tier links connects processing units in different ones of the first plurality of processing nodes, at least a second of the plurality of second tier links connects processing units in different ones of the second plurality of processing nodes, and at least a third of the plurality of second tier links connects a processing unit in the first plane to a processing unit in the second plane.

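The two-tier, two-plane topology can be enumerated directly: first tier links pair units inside a node, second tier links join nodes within a plane and also bridge the planes. A toy Python sketch of such a construction (which specific units carry the inter-node and cross-plane links is an assumption; the abstract only requires that at least one of each kind exists):

```python
from itertools import combinations

def build_links(planes):
    """Return (first_tier, second_tier) link lists for a toy two-plane
    topology of nodes, each node being a list of processing unit names."""
    first_tier, second_tier = [], []
    for plane in planes:
        for node in plane:
            first_tier += list(combinations(node, 2))   # intra-node pairs
        for node_a, node_b in combinations(plane, 2):
            second_tier.append((node_a[0], node_b[0]))  # inter-node, same plane
    # Bridge the two planes with at least one second-tier link.
    second_tier.append((planes[0][0][0], planes[1][0][0]))
    return first_tier, second_tier


plane0 = [["A0", "A1"], ["B0", "B1"]]
plane1 = [["C0", "C1"], ["D0", "D1"]]
ft, st = build_links([plane0, plane1])
assert ("A0", "A1") in ft    # first tier: units in the same node
assert ("A0", "B0") in st    # second tier: different nodes, same plane
assert ("A0", "C0") in st    # second tier: across the two planes
```

The sketch shows why the three second-tier cases in the abstract are distinct: same-plane links scale out each plane, while the cross-plane link is what stitches the two planes into one fabric.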

    Data processing system, method and interconnect fabric having a flow governor
    59.
    Invention Application
    Data processing system, method and interconnect fabric having a flow governor (Granted)

    Publication No.: US20060187958A1

    Publication Date: 2006-08-24

    Application No.: US11055399

    Filing Date: 2005-02-10

    IPC Class: H04J3/22 H04J3/16

    Abstract: A data processing system includes a plurality of local hubs each coupled to a remote hub by a respective one of a plurality of point-to-point communication links. Each of the plurality of local hubs queues requests for access to memory blocks for transmission on a respective one of the point-to-point communication links to a shared resource in the remote hub. Each of the plurality of local hubs transmits requests to the remote hub utilizing only a fractional portion of a bandwidth of its respective point-to-point communication link. The fractional portion that is utilized is determined by an allocation policy based at least in part upon a number of the plurality of local hubs and a number of processing units represented by each of the plurality of local hubs. The allocation policy prevents overruns of the shared resource.

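The allocation arithmetic can be shown directly: each local hub is granted a fraction of its link bandwidth sized so that the hubs' combined demand on the remote hub's shared resource cannot exceed what the resource can absorb. A sketch under the assumption of an even, unit-weighted split (the patent's policy may weight hubs by the processing units they represent):

```python
def hub_fraction(num_local_hubs, units_per_hub):
    """Fraction of its link bandwidth one local hub may use toward the
    remote hub's shared resource, weighting each hub by the processing
    units it represents (all hubs equal here, so it reduces to 1/N)."""
    total_units = num_local_hubs * units_per_hub
    return units_per_hub / total_units


# With 4 equally weighted local hubs, each gets a quarter of the pipe.
frac = hub_fraction(4, 2)
assert frac == 0.25
assert 4 * frac <= 1.0   # combined demand cannot overrun the shared resource
```

The second assertion is the governor's invariant: summed over all local hubs, the granted fractions never exceed the shared resource's capacity, so no back-pressure or overrun handling is needed at the remote hub.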

    Victim cache using direct intervention
    60.
    Invention Application
    Victim cache using direct intervention (Expired)

    Publication No.: US20060184742A1

    Publication Date: 2006-08-17

    Application No.: US11056649

    Filing Date: 2005-02-12

    IPC Class: G06F12/00

    CPC Class: G06F12/0897 G06F12/127

    Abstract: A method, system, and device for enabling intervention across same-level cache memories. In a preferred embodiment, responsive to a cache miss in a first cache memory, a direct intervention request is sent from the first cache memory to a second cache memory requesting a direct intervention that satisfies the cache miss. In an alternate embodiment, direct intervention is utilized to access a same-level victim cache.
