Data processing system and method for efficient coherency communication utilizing coherency domains
    2.
    Invention Grant (Expired)

    Publication No.: US08214600B2

    Publication Date: 2012-07-03

    Application No.: US11055402

    Filing Date: 2005-02-10

    IPC Class: G06F12/00

    Abstract: In a cache coherent data processing system including at least first and second coherency domains, a master performs a first broadcast of an operation within the cache coherent data processing system that is limited in scope of transmission to the first coherency domain. The master receives a response of the first coherency domain to the first broadcast of the operation. If the response indicates the operation cannot be serviced in the first coherency domain alone, the master increases the scope of transmission by performing a second broadcast of the operation in both the first and second coherency domains. If the response indicates the operation can be serviced in the first coherency domain, the master refrains from performing the second broadcast.

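    The escalation flow described in this abstract can be pictured with a short sketch. It is an illustrative outline only, not the patented implementation; the `broadcast()` stub, the scope and response enums, and the even/odd address rule are invented for the example.

```c
#include <stdio.h>

/* Hypothetical broadcast scopes and combined responses. */
typedef enum { SCOPE_LOCAL_DOMAIN, SCOPE_ALL_DOMAINS } scope_t;
typedef enum { RESP_SERVICED, RESP_RETRY_GLOBAL } resp_t;

/* Stub for issuing an operation at a given scope and collecting the
 * combined response of the snoopers in that scope.  For the demo, even
 * addresses pretend to be serviceable within the local domain.          */
static resp_t broadcast(unsigned long addr, scope_t scope)
{
    printf("broadcast addr=%#lx scope=%s\n", addr,
           scope == SCOPE_LOCAL_DOMAIN ? "local" : "global");
    return (scope == SCOPE_ALL_DOMAINS || (addr & 1) == 0)
               ? RESP_SERVICED : RESP_RETRY_GLOBAL;
}

/* Master-side flow from the abstract: try the local coherency domain
 * first and widen to both domains only if the response indicates the
 * operation cannot be serviced in the first domain alone.               */
static void issue_operation(unsigned long addr)
{
    if (broadcast(addr, SCOPE_LOCAL_DOMAIN) == RESP_RETRY_GLOBAL)
        broadcast(addr, SCOPE_ALL_DOMAINS);   /* second, wider broadcast */
    /* otherwise the master refrains from the second broadcast */
}

int main(void)
{
    issue_operation(0x1000);   /* serviced locally in this stub */
    issue_operation(0x1001);   /* escalates to a global broadcast */
    return 0;
}
```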

Efficient coherency communication utilizing an IG coherency state
    3.
    Invention Grant (Expired)

    Publication No.: US07783841B2

    Publication Date: 2010-08-24

    Application No.: US11836965

    Filing Date: 2007-08-10

    IPC Class: G06F12/00

    CPC Class: G06F12/0831 G06F12/0813

    Abstract: A cache coherent data processing system includes at least first and second coherency domains each including at least one processing unit and a cache memory. The cache memory includes a cache controller, a data array including a data storage location for caching a memory block, and a cache directory. The cache directory includes a tag field for storing an address tag in association with the memory block and a coherency state field associated with the tag field and the data storage location. The coherency state field has a plurality of possible states including a state that indicates that the address tag is valid, that the storage location does not contain valid data, and that the memory block is possibly cached outside of the first coherency domain.

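    As a rough picture of the directory organization described above, the sketch below models an entry whose coherency state field can encode an IG-like state: address tag valid, data storage location invalid, block possibly cached outside the first coherency domain. The state names, field widths, and helper functions are assumptions for illustration, not the actual directory layout.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative coherency states; STATE_IG marks "address tag valid, data
 * storage location invalid, block possibly cached outside this domain". */
typedef enum { STATE_I, STATE_IG, STATE_S, STATE_M } coh_state_t;

/* One cache directory entry: an address tag plus the coherency state
 * field associated with that tag and its data storage location.         */
struct dir_entry {
    uint64_t    tag;    /* address tag for the cached memory block */
    coh_state_t state;  /* coherency state field                   */
};

/* The data storage location holds valid data only in data-valid states. */
static bool data_valid(const struct dir_entry *e)
{
    return e->state == STATE_S || e->state == STATE_M;
}

/* IG records that the block may still be cached in another coherency
 * domain even though the local data copy is gone.                       */
static bool possibly_cached_remotely(const struct dir_entry *e)
{
    return e->state == STATE_IG;
}

int main(void)
{
    struct dir_entry e = { .tag = 0xABCDE, .state = STATE_IG };
    printf("data valid: %d, possibly cached outside domain: %d\n",
           data_valid(&e), possibly_cached_remotely(&e));
    return 0;
}
```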

Data processing system and method for efficient coherency communication utilizing coherency domain indicators
    4.
    Invention Grant (In Force)

    Publication No.: US08230178B2

    Publication Date: 2012-07-24

    Application No.: US11055483

    Filing Date: 2005-02-10

    IPC Class: G06F12/00

    Abstract: In a cache coherent data processing system including at least first and second coherency domains, a memory block is stored in a system memory in association with a domain indicator indicating whether or not the memory block is cached, if at all, only within the first coherency domain. A master in the first coherency domain determines whether or not a scope of broadcast transmission of an operation should extend beyond the first coherency domain by reference to the domain indicator stored in the cache and then performs a broadcast of the operation within the cache coherent data processing system in accordance with the determination.

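    A minimal sketch of the idea in this abstract, under the assumption that the domain indicator is a single bit cached alongside the block: the master reads that bit to choose between a local-only and a global broadcast scope. Structure and helper names are illustrative only.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { SCOPE_LOCAL_DOMAIN, SCOPE_ALL_DOMAINS } scope_t;

/* A memory block as held in system memory, with an associated domain
 * indicator: true means the block is cached, if at all, only within
 * the first (local) coherency domain.                                 */
struct mem_block {
    uint8_t data[128];
    bool    local_only;   /* domain indicator */
};

/* The master chooses the broadcast scope by reference to the (cached)
 * domain indicator and then broadcasts at that scope.                 */
static scope_t choose_scope(const struct mem_block *b)
{
    return b->local_only ? SCOPE_LOCAL_DOMAIN : SCOPE_ALL_DOMAINS;
}

int main(void)
{
    struct mem_block a = { .local_only = true  };
    struct mem_block b = { .local_only = false };
    printf("block a -> %s broadcast\n",
           choose_scope(&a) == SCOPE_LOCAL_DOMAIN ? "local-only" : "global");
    printf("block b -> %s broadcast\n",
           choose_scope(&b) == SCOPE_LOCAL_DOMAIN ? "local-only" : "global");
    return 0;
}
```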

Data processing system and method for efficient communication utilizing an Ig coherency state
    5.
    Invention Grant (Expired)

    Publication No.: US07584329B2

    Publication Date: 2009-09-01

    Application No.: US11055524

    Filing Date: 2005-02-10

    IPC Class: G06F12/00

    CPC Class: G06F12/0831 G06F12/0813

    Abstract: A cache coherent data processing system includes at least first and second coherency domains each including at least one processing unit and a cache memory. The cache memory includes a cache controller, a data array including a data storage location for caching a memory block, and a cache directory. The cache directory includes a tag field for storing an address tag in association with the memory block and a coherency state field associated with the tag field and the data storage location. The coherency state field has a plurality of possible states including a state that indicates that the address tag is valid, that the storage location does not contain valid data, and that the memory block is possibly cached outside of the first coherency domain.

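    This abstract is shared with the earlier IG entry, so rather than repeat that sketch, the one below ties the two ideas in this patent family together: a hypothetical master that finds an IG-like directory state assumes the block may be cached in another coherency domain and starts with a global broadcast scope. The decision rule is an inference for illustration, not a statement of the claimed method.

```c
#include <stdio.h>

typedef enum { STATE_I, STATE_IG, STATE_S, STATE_M } coh_state_t;
typedef enum { SCOPE_LOCAL_DOMAIN, SCOPE_ALL_DOMAINS } scope_t;

/* Assumed decision rule: an IG hit means the block may be cached in
 * another coherency domain, so a local-only broadcast is unlikely to
 * succeed and the master chooses a global scope up front.             */
static scope_t initial_scope(coh_state_t directory_state)
{
    return directory_state == STATE_IG ? SCOPE_ALL_DOMAINS
                                       : SCOPE_LOCAL_DOMAIN;
}

int main(void)
{
    printf("IG hit -> %s\n",
           initial_scope(STATE_IG) == SCOPE_ALL_DOMAINS ? "global" : "local");
    printf("I miss -> %s\n",
           initial_scope(STATE_I) == SCOPE_ALL_DOMAINS ? "global" : "local");
    return 0;
}
```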

Data processing system and method for efficient coherency communication utilizing coherency domain indicators
    6.
    Invention Grant (In Force)

    Publication No.: US07774555B2

    Publication Date: 2010-08-10

    Application No.: US11835259

    Filing Date: 2007-08-07

    IPC Class: G06F12/00

    Abstract: In a cache coherent data processing system including at least first and second coherency domains, a memory block is stored in a system memory in association with a domain indicator indicating whether or not the memory block is cached, if at all, only within the first coherency domain. A master in the first coherency domain determines whether or not a scope of broadcast transmission of an operation should extend beyond the first coherency domain by reference to the domain indicator stored in the cache and then performs a broadcast of the operation within the cache coherent data processing system in accordance with the determination.


System bus structure for large L2 cache array topology with different latency domains
    7.
    Invention Grant (Expired)

    Publication No.: US07469318B2

    Publication Date: 2008-12-23

    Application No.: US11054925

    Filing Date: 2005-02-10

    IPC Class: G06F12/00

    Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses. The first data bus can be one of a plurality of data busses in a first data bus set, and the second data bus can be one of a plurality of data busses in a second data bus set. Two address busses (one for each data bus set) are used to receive successive address tags that identify which portions of the requested memory values are being received from each data bus set. For example, the requested memory values may be 32 bytes each, and the separate portions of the requested memory values are received over four successive cycles with an 8-byte portion of each value received each cycle. The cache lines are spread across different cache sectors of the cache memory, wherein the cache sectors have different output latencies, and the separate portions of a given requested memory value are loaded sequentially into the corresponding cache sectors based on their respective output latencies. Merge flow circuits responsive to the cache controller are used to receive the portions of a requested memory value and input those bytes into the cache sector.

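    The overlapped, interleaved fill described above can be mimicked with a toy simulation: two 32-byte requested values arrive as four 8-byte beats each, one beat per bus per cycle, and each beat is merged into the next sector of its cache line. Cycle timing, sector granularity, and the merge helper are simplified assumptions.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BEAT      8          /* bytes delivered per bus per cycle    */
#define BEATS     4          /* beats per 32-byte requested value    */
#define LINESIZE  (BEAT * BEATS)

/* Two cache lines being filled, one per data bus set, spread across
 * "sectors" that are loaded in order of their output latency.        */
static uint8_t cache_line[2][LINESIZE];

/* Merge-flow stand-in: place one 8-byte beat into its next sector.   */
static void merge_beat(int line, int beat, const uint8_t *bytes)
{
    memcpy(&cache_line[line][beat * BEAT], bytes, BEAT);
}

int main(void)
{
    uint8_t value[2][LINESIZE];          /* the two requested values  */
    for (int i = 0; i < LINESIZE; i++) {
        value[0][i] = (uint8_t)i;
        value[1][i] = (uint8_t)(0x80 + i);
    }

    /* Overlapping time spans: every cycle each bus delivers one beat,
     * so the two line fills proceed in parallel, not back to back.    */
    for (int cycle = 0; cycle < BEATS; cycle++) {
        merge_beat(0, cycle, &value[0][cycle * BEAT]);  /* bus set 0 */
        merge_beat(1, cycle, &value[1][cycle * BEAT]);  /* bus set 1 */
        printf("cycle %d: merged beat %d of both lines\n", cycle, cycle);
    }

    printf("line0 ok: %d, line1 ok: %d\n",
           memcmp(cache_line[0], value[0], LINESIZE) == 0,
           memcmp(cache_line[1], value[1], LINESIZE) == 0);
    return 0;
}
```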

PIPELINING D STATES FOR MRU STEERAGE DURING MRU-LRU MEMBER ALLOCATION
    8.
    Invention Application (In Force)

    Publication No.: US20080244187A1

    Publication Date: 2008-10-02

    Application No.: US12118238

    Filing Date: 2008-05-09

    IPC Class: G06F12/08

    Abstract: A method and apparatus for preventing selection of Deleted (D) members as an LRU victim during LRU victim selection. During each cache access targeting the particular congruence class, the deleted cache line is identified from information in the cache directory. A location of a deleted cache line is pipelined through the cache architecture during LRU victim selection. The information is latched and then passed to MRU vector generation logic. An MRU vector is generated and passed to the MRU update logic, which selects/tags the deleted member as an MRU member. The make MRU operation affects only the lower level LRU state bits arranged in a tree-based structure, so that it only negates selection of the specific member in the D state, without affecting LRU victim selection of the other members.

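    A simplified sketch of the steering idea: an 8-way binary-tree pseudo-LRU where a make-MRU update along the deleted member's path guarantees the victim walk cannot land on it. The patent restricts the update to the lower-level tree bits; this sketch updates the whole path, so it is only an approximation of the described behavior.

```c
#include <stdio.h>

#define WAYS 8                 /* 8-way congruence class                  */
static int lru_bits[WAYS - 1]; /* binary-tree pseudo-LRU: 7 internal bits */

/* Point every node on the path from `way` to the root away from `way`,
 * so the victim walk cannot end at this member.  (Simplified: the patent
 * confines the update to lower-level tree bits; the whole path is
 * updated here.)                                                         */
static void make_mru(int way)
{
    int node = way + (WAYS - 1);            /* virtual leaf position */
    while (node > 0) {
        int parent = (node - 1) / 2;
        lru_bits[parent] = (node == 2 * parent + 1) ? 1 : 0;
        node = parent;
    }
}

/* Walk the tree in the direction each bit points to find the LRU victim. */
static int select_victim(void)
{
    int node = 0;
    while (node < WAYS - 1)
        node = 2 * node + 1 + lru_bits[node];
    return node - (WAYS - 1);
}

int main(void)
{
    int deleted_way = 3;       /* member found in the D state this access */

    /* Pipelined D-state steering: before victim selection, the deleted
     * member's location reaches the MRU update logic as an MRU vector,
     * which "makes it MRU" so it cannot be chosen as the LRU victim.      */
    make_mru(deleted_way);

    int victim = select_victim();
    printf("victim = way %d (deleted way %d was excluded)\n",
           victim, deleted_way);
    return 0;
}
```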

Cache member protection with partial make MRU allocation
    9.
    Invention Grant (Expired)

    Publication No.: US07363433B2

    Publication Date: 2008-04-22

    Application No.: US11054390

    Filing Date: 2005-02-09

    IPC Class: G06F12/00

    Abstract: A method and apparatus for enabling protection of a particular member of a cache during LRU victim selection. The LRU state array includes additional “protection” bits in addition to the state bits. The protection bits serve as a pointer to identify the location of the member of the congruence class that is to be protected. A protected member is not removed from the cache during standard LRU victim selection, unless that member is invalid. The protection bits are pipelined to the MRU update logic, where they are used to generate an MRU vector. The particular member identified by the MRU vector (and pointer) is protected from selection as the next LRU victim, unless the member is Invalid. The make MRU operation affects only the lower level LRU state bits arranged in a tree-based structure and thus only negates the selection of the protected member, without affecting LRU victim selection of the other members.

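    Building on the same tree-based make-MRU mechanism as the previous entry, this sketch shows only the piece this abstract adds: protection bits stored with the LRU state act as a pointer that is decoded into a one-hot MRU vector, except when the protected member is invalid. Field names and widths are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WAYS 8

/* Per-congruence-class LRU state: tree bits plus "protection" bits that
 * point at the member to be protected from LRU victim selection.        */
struct lru_state {
    uint8_t tree_bits;    /* pseudo-LRU tree bits (unused in this sketch) */
    uint8_t protect_ptr;  /* index of the protected member                */
};

/* Decode the protection pointer into a one-hot MRU vector.  An invalid
 * protected member yields an empty vector, so it can still be victimized. */
static uint8_t protection_mru_vector(const struct lru_state *s,
                                     const bool valid[WAYS])
{
    if (!valid[s->protect_ptr])
        return 0;                       /* protection does not apply */
    return (uint8_t)(1u << s->protect_ptr);
}

int main(void)
{
    bool valid[WAYS] = { true, true, true, true, true, true, true, true };
    struct lru_state s = { .tree_bits = 0, .protect_ptr = 5 };

    printf("MRU vector = 0x%02x\n",
           (unsigned)protection_mru_vector(&s, valid));
    valid[5] = false;                   /* protected member is Invalid */
    printf("MRU vector = 0x%02x\n",
           (unsigned)protection_mru_vector(&s, valid));
    return 0;
}
```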

Cache allocation mechanism for biasing subsequent allocations based upon cache directory state
    10.
    Invention Grant (Expired)

    Publication No.: US07103721B2

    Publication Date: 2006-09-05

    Application No.: US10425459

    Filing Date: 2003-04-28

    IPC Class: G06F12/12

    CPC Class: G06F12/0891 G06F12/0817

    Abstract: An improved method and apparatus for selecting invalid members as victims in a least recently used cache system. An invalid cache line selection unit has an input connected to a cache directory and an output connected to most recently used update logic. In response to a miss in the cache, an invalid cache line is identified from information in the cache directory by the invalid cache line selection unit. This invalid cache line is updated to be the next victim by the most recently used update logic, rather than attempting to override the current victim selection by the least recently used victim selection logic. The next victim also may be selected in response to a cache hit in which information from the cache directory also is read.

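    A rough sketch of the biasing mechanism: whenever a directory read (on a hit or a miss) finds an invalid member, that member is recorded as the next victim for its congruence class, so the next allocation can take it without having to override the LRU victim selection logic at that moment. The `next_victim` field and its sentinel value are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

#define WAYS 8
#define NO_BIAS (-1)

/* Per-congruence-class state: directory valid bits plus a recorded
 * "next victim" that biases the following allocation.                */
struct cclass {
    bool valid[WAYS];
    int  next_victim;   /* NO_BIAS, or the invalid member to use next */
};

/* Runs as part of each directory read (hit or miss): if an invalid
 * member exists, remember it as the next victim.                     */
static void bias_next_victim(struct cclass *c)
{
    for (int w = 0; w < WAYS; w++) {
        if (!c->valid[w]) {
            c->next_victim = w;
            return;
        }
    }
}

/* Allocation: prefer the pre-selected invalid member; otherwise fall
 * back to whatever the LRU victim logic would have chosen.            */
static int select_victim(struct cclass *c, int lru_choice)
{
    if (c->next_victim != NO_BIAS) {
        int w = c->next_victim;
        c->next_victim = NO_BIAS;
        return w;
    }
    return lru_choice;
}

int main(void)
{
    struct cclass c = { .next_victim = NO_BIAS };
    for (int w = 0; w < WAYS; w++) c.valid[w] = (w != 6);  /* way 6 invalid */

    bias_next_victim(&c);                 /* directory read on an access */
    printf("victim = way %d\n", select_victim(&c, /*lru_choice=*/2));
    return 0;
}
```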