Data processing system and method for efficient communication utilizing an Tn and Ten coherency states
    1. Invention application (in force)

    Publication No.: US20080040556A1

    Publication date: 2008-02-14

    Application No.: US11835984

    Filing date: 2007-08-08

    IPC class: G06F12/08

    Abstract: A cache coherent data processing system includes at least first and second coherency domains each including at least one processing unit. The first coherency domain includes a first cache memory and a second cache memory, and the second coherency domain includes a remote coherent cache memory. The first cache memory includes a cache controller, a data array including a data storage location for caching a memory block, and a cache directory. The cache directory includes a tag field for storing an address tag in association with the memory block and a coherency state field associated with the tag field and the data storage location. The coherency state field has a plurality of possible states including a state that indicates that the memory block is possibly shared with the second cache memory in the first coherency domain and cached only within the first coherency domain.
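
    As a rough illustration of the directory layout described in the abstract, the C sketch below pairs an address tag with a coherency state field whose values include one domain-local shared state. The state names (CSTATE_TN and the rest) and field types are assumptions made for illustration, not the encoding claimed in the application.

        /* Illustrative sketch only: state names and field types are assumed. */
        #include <stdbool.h>
        #include <stdint.h>

        /* A possible state set in which one value (CSTATE_TN) marks a block
         * that may be shared with another cache in the local coherency domain
         * but is cached nowhere outside that domain. */
        typedef enum {
            CSTATE_INVALID,
            CSTATE_SHARED,   /* possibly cached in other coherency domains  */
            CSTATE_MODIFIED,
            CSTATE_TN        /* shared locally, cached only in this domain  */
        } coherency_state_t;

        /* One cache directory entry: an address tag associated with a data
         * storage location, plus the coherency state field. */
        typedef struct {
            uint64_t          tag;    /* address tag for the cached block   */
            coherency_state_t state;  /* coherency state field              */
            bool              valid;
        } cache_dir_entry_t;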

    Data processing system and method for predictively selecting a scope of broadcast of an operation utilizing a location of a memory
    4. Invention application (in force)

    Publication No.: US20060179249A1

    Publication date: 2006-08-10

    Application No.: US11055697

    Filing date: 2005-02-10

    IPC class: G06F13/28

    CPC class: G06F12/0831 G06F12/0813

    Abstract: A cache coherent data processing system includes a memory and at least first and second coherency domains that each include a respective one of first and second cache memories. A master in the first coherency domain selects a scope of an initial broadcast of an operation targeting a request address allocated to the memory from among a first scope including only the first coherency domain and a second scope including both the first and second coherency domains. The master selects the scope based, at least in part, upon whether the memory belongs to the first coherency domain and performs an initial broadcast of the operation within the cache coherent data processing system utilizing the selected scope.
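
    The scope prediction described above can be pictured with the short C sketch below: the master checks whether the target address is homed in its own coherency domain and, if so, first attempts a local-only broadcast. The home_domain_of() mapping, the 1 GiB interleave, and the scope names are hypothetical stand-ins, not part of the filing.

        #include <stdint.h>

        typedef enum { SCOPE_LOCAL_DOMAIN, SCOPE_ALL_DOMAINS } bcast_scope_t;

        /* Hypothetical mapping from a physical address to the coherency domain
         * that hosts its system memory; a simple 1 GiB interleave stands in
         * for the real system topology. */
        static int home_domain_of(uint64_t addr, int num_domains)
        {
            return (int)((addr >> 30) % (uint64_t)num_domains);
        }

        /* Predictive scope selection: broadcast only within the local domain
         * when the target memory is homed there, otherwise go global. */
        static bcast_scope_t select_initial_scope(uint64_t addr, int my_domain,
                                                  int num_domains)
        {
            return (home_domain_of(addr, num_domains) == my_domain)
                       ? SCOPE_LOCAL_DOMAIN
                       : SCOPE_ALL_DOMAINS;
        }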

    Reducing number of rejected snoop requests by extending time to respond to snoop request

    Publication No.: US20060184748A1

    Publication date: 2006-08-17

    Application No.: US11056740

    Filing date: 2005-02-11

    IPC class: G06F13/28

    CPC class: G06F12/0831

    Abstract: A cache, system and method for reducing the number of rejected snoop requests. A “stall/reorder unit” in a cache receives a snoop request from an interconnect. Information, such as the address, of the snoop request is stored in a queue of the stall/reorder unit. The stall/reorder unit forwards the snoop request to a selector which also receives a request from a processor. An arbitration mechanism selects either the snoop request or the request from the processor. If the snoop request is denied by the arbitration mechanism, information, e.g., address, about the snoop request may be maintained in the stall/reorder unit. The request may be later resent to the selector. This process may be repeated up to “n” clock cycles. By providing the snoop request additional opportunities (n clock cycles) to be accepted by the arbitration mechanism, fewer snoop requests may ultimately be denied.
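
    A minimal C sketch of the retry behaviour in this abstract follows, assuming a single queued entry and a stubbed arbiter; MAX_RETRY_CYCLES stands in for the “n” clock cycles, and the real stall/reorder unit and arbitration mechanism are not specified here.

        #include <stdbool.h>
        #include <stdint.h>

        #define MAX_RETRY_CYCLES 8   /* the "n" clock cycles from the abstract */

        typedef struct {
            uint64_t addr;           /* address (and other info) of the snoop  */
            int      cycles_waited;  /* how long it has been held in the queue */
        } snoop_entry_t;

        /* Arbitration stub: returns true when the snoop request wins over a
         * competing processor request. A real arbiter weighs both inputs. */
        static bool arbiter_grants_snoop(const snoop_entry_t *e)
        {
            (void)e;
            return false;            /* placeholder decision                   */
        }

        /* Re-present a stalled snoop each cycle, up to MAX_RETRY_CYCLES,
         * before finally rejecting it. */
        static bool present_snoop(snoop_entry_t *e)
        {
            while (e->cycles_waited < MAX_RETRY_CYCLES) {
                if (arbiter_grants_snoop(e))
                    return true;     /* accepted: dispatch to the cache        */
                e->cycles_waited++;  /* denied: keep it queued and retry       */
            }
            return false;            /* only now is the snoop rejected         */
        }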

    Reducing Number of Rejected Snoop Requests By Extending Time to Respond to Snoop Request
    6. Invention application (expired)

    Publication No.: US20070294486A1

    Publication date: 2007-12-20

    Application No.: US11847941

    Filing date: 2007-08-30

    IPC class: G06F12/00

    CPC class: G06F12/0831

    Abstract: A cache, system and method for reducing the number of rejected snoop requests. A “stall/reorder unit” in a cache receives a snoop request from an interconnect. Information, such as the address, of the snoop request is stored in a queue of the stall/reorder unit. The stall/reorder unit forwards the snoop request to a selector which also receives a request from a processor. An arbitration mechanism selects either the snoop request or the request from the processor. If the snoop request is denied by the arbitration mechanism, information, e.g., address, about the snoop request may be maintained in the stall/reorder unit. The request may be later resent to the selector. This process may be repeated up to “n” clock cycles. By providing the snoop request additional opportunities (n clock cycles) to be accepted by the arbitration mechanism, fewer snoop requests may ultimately be denied.

    Cache memory, processing unit, data processing system and method for filtering snooped operations
    8. Invention application (in force)

    Publication No.: US20060179244A1

    Publication date: 2006-08-10

    Application No.: US11055418

    Filing date: 2005-02-10

    IPC class: G06F13/28

    CPC class: G06F12/0817 G06F12/0831

    Abstract: A cache coherent data processing system includes at least a first cache memory supporting a first processing unit and a second cache memory supporting a second processing unit. The first cache memory includes a cache array and a cache directory of contents of the cache array. In response to the first cache memory detecting on an interconnect a broadcast operation that specifies a request address, the first cache memory determines from the operation a type of the operation and a coherency state associated with the request address. In response to determining the type and the coherency state, the first cache memory filters out the broadcast operation without accessing the cache directory.
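
    The filtering idea can be sketched in C as below. The operation types, the coherency state hint assumed to arrive with the broadcast, and the example drop condition are all illustrative inventions; the application defines its own encodings and filtering rules.

        #include <stdbool.h>

        typedef enum { OP_READ, OP_RWITM, OP_DCLAIM, OP_KILL } op_type_t;
        typedef enum { HINT_INVALID, HINT_SHARED, HINT_MODIFIED } state_hint_t;

        typedef struct {
            op_type_t    type;   /* kind of snooped operation                   */
            state_hint_t hint;   /* coherency state conveyed with the operation */
        } snooped_op_t;

        /* Decide from the operation alone whether it can be dropped. Returning
         * true filters the snoop without touching the cache directory, saving
         * a directory access. */
        static bool filter_without_directory(const snooped_op_t *op)
        {
            /* Example policy: a read of data already indicated as merely
             * shared requires no action from this cache, so it is ignored. */
            return (op->type == OP_READ && op->hint == HINT_SHARED);
        }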

    Reducing Number of Rejected Snoop Requests By Extending Time To Respond To Snoop Request
    9. Invention application (expired)

    Publication No.: US20080077744A1

    Publication date: 2008-03-27

    Application No.: US11950717

    Filing date: 2007-12-05

    IPC class: G06F13/28

    CPC class: G06F12/0831

    Abstract: A cache, system and method for reducing the number of rejected snoop requests. A “stall/reorder unit” in a cache receives a snoop request from an interconnect. Information, such as the address, of the snoop request is stored in a queue of the stall/reorder unit. The stall/reorder unit forwards the snoop request to a selector which also receives a request from a processor. An arbitration mechanism selects either the snoop request or the request from the processor. If the snoop request is denied by the arbitration mechanism, information, e.g., address, about the snoop request may be maintained in the stall/reorder unit. The request may be later resent to the selector. This process may be repeated up to “n” clock cycles. By providing the snoop request additional opportunities (n clock cycles) to be accepted by the arbitration mechanism, fewer snoop requests may ultimately be denied.

    Data processing system, cache system and method for scrubbing a domain indication in response to execution of program code
    10. Invention application (in force)

    Publication No.: US20060271741A1

    Publication date: 2006-11-30

    Application No.: US11136642

    Filing date: 2005-05-24

    IPC class: G06F13/28

    CPC class: G06F12/0831 G06F12/0813

    Abstract: In response to execution of program code, a control register within scrubbing logic in a local coherency domain is initialized with at least a target address of a target memory block. In response to the initialization, the scrubbing logic issues to at least one cache hierarchy in a remote coherency domain a domain indication scrubbing request targeting a target memory block that may be cached by the at least one cache hierarchy. In response to receipt of a coherency response indicating that the target memory block is not cached in the remote coherency domain, a domain indication in the local coherency domain is updated to indicate that the target memory block is cached, if at all, only within the local coherency domain.
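
    The sequence in this abstract, reduced to a hedged C sketch: program code arms a control register with the target address, the scrubbing logic probes the remote domain, and a clean coherency response lets the local domain indication be set to "local only". The register layout and the probe function are placeholders, not the claimed hardware.

        #include <stdbool.h>
        #include <stdint.h>

        typedef struct {
            uint64_t target_addr;    /* control register: block to be scrubbed */
            bool     armed;
        } scrub_ctrl_t;

        /* Hypothetical probe of the remote coherency domain; returns true if
         * no remote cache hierarchy holds a copy of the target block. */
        static bool remote_domain_reports_not_cached(uint64_t addr)
        {
            (void)addr;
            return true;             /* placeholder coherency response         */
        }

        /* Program code writes the control register, the scrubbing logic issues
         * its request and, on a clean response, marks the block's domain
         * indication as cached only locally, if at all. */
        static void scrub_domain_indication(scrub_ctrl_t *ctrl,
                                            bool *local_only_indication)
        {
            ctrl->armed = true;                         /* initialization      */
            if (remote_domain_reports_not_cached(ctrl->target_addr))
                *local_only_indication = true;          /* update indication   */
            ctrl->armed = false;
        }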
