Dynamic disablement of a transaction ordering in response to an error
    51.
    Invention grant
    Dynamic disablement of a transaction ordering in response to an error (Expired)

    Publication No.: US6163815A

    Publication Date: 2000-12-19

    Application No.: US85197

    Filing Date: 1998-05-27

    IPC Classification: G06F3/00 G06F13/40

    CPC Classification: G06F13/4027

    Abstract: An apparatus and method of transmitting data from a PCI 2.1 compliant device are provided. PCI devices (i.e., PCI bridges) designed in accordance with the PCI 2.1 specification have a load data ordering feature that prohibits load data from bypassing DMA write data in the bridge. The present apparatus and method allow load data to bypass DMA write data in the PCI bridge if the bridge is in an error state.
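
    The ordering rule described above can be pictured as a small decision function. The sketch below is an illustrative C rendering under assumed names (pci_bridge_t, dma_writes_pending, error_state); none of these identifiers come from the patent itself.

        #include <stdbool.h>

        /* Hypothetical bridge state -- just enough to express the rule. */
        typedef struct {
            bool error_state;        /* bridge has latched an error condition     */
            int  dma_writes_pending; /* DMA write data still queued in the bridge */
        } pci_bridge_t;

        /* PCI 2.1 ordering: load (read) data must not bypass queued DMA write
         * data -- unless the bridge is in an error state, in which case the
         * ordering requirement is dynamically disabled. */
        static bool load_may_bypass_dma_writes(const pci_bridge_t *bridge)
        {
            if (bridge->dma_writes_pending == 0)
                return true;             /* nothing to order against           */
            return bridge->error_state;  /* bypass allowed only while in error */
        }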

    High performance data processing system via cache victimization protocols
    52.
    Invention grant
    High performance data processing system via cache victimization protocols (Expired)

    Publication No.: US06721853B2

    Publication Date: 2004-04-13

    Application No.: US09895232

    Filing Date: 2001-06-29

    IPC Classification: G06F12/08

    CPC Classification: G06F12/0813

    Abstract: A cache controller for a processor in a remote node of a system bus in a multiway multiprocessor link sends out a cache deallocate address transaction (CDAT) for a given cache line when that cache line is flushed and information from memory in a home node is no longer deemed valid for that cache line of that remote node processor. A local snoop of that CDAT transaction is then performed as a background function by other processors in the same remote node. If the snoop results indicate that the same information is valid in another cache, and that cache decides it is better to keep it valid in that remote node, then the information remains there. If the snoop results indicate that the information is not valid among caches in that remote node, or will be flushed due to the CDAT, the system memory directory in the home node of the multiprocessor link is notified and changes state in response. The system achieves higher performance because the cache line maintenance functions are performed in the background rather than on mainstream demand.
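
    The decision taken after the background snoop can be sketched as follows; the enum values and parameter names are assumptions made for illustration, not terminology from the patent.

        #include <stdbool.h>

        /* Possible outcomes of the local (same remote node) snoop of a CDAT. */
        typedef enum {
            CDAT_KEEP_IN_NODE,     /* another local cache holds the line valid and keeps it    */
            CDAT_NOTIFY_HOME_NODE  /* line not kept locally: home-node directory changes state */
        } cdat_outcome_t;

        static cdat_outcome_t resolve_cdat_snoop(bool valid_in_other_local_cache,
                                                 bool other_cache_keeps_line)
        {
            if (valid_in_other_local_cache && other_cache_keeps_line)
                return CDAT_KEEP_IN_NODE;    /* the information remains in the remote node */
            return CDAT_NOTIFY_HOME_NODE;    /* notify the system memory directory         */
        }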

    Intelligent cache management mechanism via processor access sequence analysis
    53.
    Invention grant
    Intelligent cache management mechanism via processor access sequence analysis (Expired)

    Publication No.: US06629210B1

    Publication Date: 2003-09-30

    Application No.: US09696888

    Filing Date: 2000-10-26

    IPC Classification: G06F12/08

    CPC Classification: G06F12/121 G06F12/0815

    Abstract: In addition to an address tag, a coherency state and an LRU position, each cache directory entry includes historical processor access information for the corresponding cache line. The historical processor access information includes different subentries for each different processor which has accessed the corresponding cache line, with subentries being “pushed” along the stack when a new processor accesses the subject cache line. Each subentry contains the processor identifier for the corresponding processor which accessed the cache line, one or more opcodes identifying the operations which were performed by the processor, and timestamps associated with each opcode. This historical processor access information may then be utilized by the cache controller to influence victim selection, coherency state transitions, LRU state transitions, deallocation timing, and other cache management functions, so that smaller caches are given the effectiveness of very large caches through more intelligent cache management.
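
    The directory entry layout described above can be sketched as a C data structure. The sizes (MAX_HISTORY, MAX_OPS) and field names are illustrative assumptions; the patent does not fix them.

        #include <stdint.h>
        #include <string.h>

        #define MAX_HISTORY 4   /* subentries kept per cache line (assumed)  */
        #define MAX_OPS     2   /* opcodes remembered per subentry (assumed) */

        typedef struct {
            uint8_t  processor_id;        /* processor that accessed the line    */
            uint8_t  opcode[MAX_OPS];     /* operations it performed on the line */
            uint64_t timestamp[MAX_OPS];  /* when each operation occurred        */
        } access_subentry_t;

        typedef struct {
            uint64_t          address_tag;
            uint8_t           coherency_state;
            uint8_t           lru_position;
            access_subentry_t history[MAX_HISTORY]; /* most recent accessor first */
        } cache_directory_entry_t;

        /* When a new processor accesses the line, its subentry is "pushed" onto
         * the top of the history stack and the older subentries shift down. */
        static void push_access(cache_directory_entry_t *entry, access_subentry_t latest)
        {
            memmove(&entry->history[1], &entry->history[0],
                    (MAX_HISTORY - 1) * sizeof(access_subentry_t));
            entry->history[0] = latest;
        }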

    Method and apparatus of selecting data transmission channels
    54.
    Invention grant
    Method and apparatus of selecting data transmission channels (Expired)

    Publication No.: US6049841A

    Publication Date: 2000-04-11

    Application No.: US85196

    Filing Date: 1998-05-27

    IPC Classification: G06F13/30

    CPC Classification: G06F13/30

    Abstract: An apparatus and method of assigning communication channels for transmitting data through a host bridge are provided. In a preferred embodiment, a determination is made as to whether data is being transmitted through any one of the channels. If data is not being transmitted through one of the channels, that channel is designated as the transmission channel for the present data transaction. If data is being transmitted through all of the channels, a least recently used channel is selected as the data transmission channel. If, however, more than one channel is not transmitting data, the data transmission channel assignments are made among the idle channels from a lowest channel number (e.g., channel 0) to a highest channel number (e.g., channel 7) or vice versa.
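
    The selection policy reads naturally as a single pass over the channel table. The sketch below assumes eight channels and a monotonically increasing use counter; both are illustrative choices, not details from the patent.

        #include <stdint.h>

        #define NUM_CHANNELS 8   /* channels 0..7, matching the example above */

        typedef struct {
            int      busy;       /* nonzero while a transaction is in flight */
            uint64_t last_used;  /* monotonically increasing use counter     */
        } channel_t;

        /* Pick a channel for the next data transaction:
         *  - one or more idle channels: take the lowest-numbered idle one;
         *  - all channels busy: take the least recently used one. */
        static int select_channel(const channel_t ch[NUM_CHANNELS])
        {
            int lru = 0;
            for (int i = 0; i < NUM_CHANNELS; i++) {
                if (!ch[i].busy)
                    return i;                            /* lowest idle channel   */
                if (ch[i].last_used < ch[lru].last_used)
                    lru = i;
            }
            return lru;                                  /* all busy: LRU channel */
        }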

    Modified-invalid cache state to reduce cache-to-cache data transfer operations for speculatively-issued full cache line writes
    55.
    Invention grant
    Modified-invalid cache state to reduce cache-to-cache data transfer operations for speculatively-issued full cache line writes (Expired)

    Publication No.: US07284097B2

    Publication Date: 2007-10-16

    Application No.: US10675744

    Filing Date: 2003-09-30

    IPC Classification: G06F12/00

    Abstract: A cache coherency protocol includes a modified-invalid (Mi) state, which enables execution of a DMA Claim or DClaim operation to assign sole ownership of a cache line to a device that is going to overwrite the entire cache line, without cache-to-cache data transfer. The protocol enables completion of speculatively issued full cache line writes without requiring cache-to-cache transfer of data on the data bus during a preceding DMA Claim or DClaim operation. The modified-invalid (Mi) state assigns sole ownership of the cache line to an I/O device that has speculatively issued a DMA Write or a processor that has speculatively issued a DCBZ operation to overwrite the entire cache line, and the Mi state prevents data from being sent to the cache line from another cache, since the data will most probably be overwritten.
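
    A minimal sketch of where the Mi state could sit alongside conventional MESI state names; the transition helpers below are illustrative assumptions, not the patent's own state tables.

        /* MESI states plus the modified-invalid (Mi) state described above. */
        typedef enum { STATE_M, STATE_E, STATE_S, STATE_I, STATE_MI } coherency_state_t;

        /* A speculatively issued DMA Claim / DClaim for a full-line overwrite
         * grants sole ownership without any cache-to-cache data transfer: the
         * requester's line goes to Mi and every other copy is invalidated. */
        static coherency_state_t on_speculative_full_line_claim(int is_requester)
        {
            return is_requester ? STATE_MI : STATE_I;
        }

        /* When the full cache line write (DMA Write or DCBZ) later completes,
         * the line holds new data; moving Mi to M here is an assumption. */
        static coherency_state_t on_full_line_write_complete(coherency_state_t s)
        {
            return (s == STATE_MI) ? STATE_M : s;
        }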

    Multi-node data processing system and communication protocol that route write data utilizing a destination ID obtained from a combined response
    56.
    Invention grant
    Multi-node data processing system and communication protocol that route write data utilizing a destination ID obtained from a combined response (Expired)

    Publication No.: US06848003B1

    Publication Date: 2005-01-25

    Application No.: US09436901

    Filing Date: 1999-11-09

    CPC Classification: G06F12/0831 G06F12/0813

    Abstract: A data processing system includes a plurality of nodes, which each contain at least one agent and each have an associated node identifier, and memory distributed among the plurality of nodes. The data processing system further includes an interconnect containing a segmented data channel, where each node contains a segment of the segmented data channel and each segment is coupled to at least one other segment by destination logic. In response to snooping a write request of a master agent on the interconnect, a target agent that will service the write request places its node identifier in a snoop response. When the master agent receives the combined response, which contains the node identifier of the target agent, the master agent issues on the segmented data channel a write data transaction specifying the node identifier of the target agent as a destination identifier. In response to receipt of the write data transaction, the destination logic transmits the write data transaction to a next segment only if the destination identifier does not match the node identifier associated with the node containing the current segment.
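
    The forwarding rule applied by the destination logic between segments can be sketched as below; the struct and field names are assumptions made for illustration.

        #include <stdbool.h>
        #include <stdint.h>

        /* Write data transaction carrying the destination ID that the master
         * obtained from the combined response. */
        typedef struct {
            uint8_t destination_id;   /* node identifier of the target agent */
            /* ... write data payload ... */
        } write_data_txn_t;

        /* Forward to the next segment only if the transaction is not addressed
         * to the node that contains the current segment. */
        static bool forward_to_next_segment(const write_data_txn_t *txn,
                                            uint8_t this_node_id)
        {
            return txn->destination_id != this_node_id;
        }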

    Multiprocessor system snoop scheduling mechanism for limited bandwidth snoopers

    Publication No.: US06546469B2

    Publication Date: 2003-04-08

    Application No.: US09749328

    Filing Date: 2001-03-12

    IPC Classification: G06F12/00

    CPC Classification: G06F12/0831

    Abstract: A multiprocessor computer system in which snoop operations of the caches are synchronized to allow the issuance of a cache operation during a cycle which is selected based on the particular manner in which the caches have been synchronized. Each cache controller is aware of when these synchronized snoop tenures occur, and can target these cycles for certain types of requests that are sensitive to snooper retries, such as kill-type operations. The synchronization may set up a priority scheme for systems with multiple interconnect buses, or may synchronize the refresh cycles of the DRAM memory of the snooper's directory. In another aspect of the invention, windows are created during which a directory will not receive write operations (i.e., the directory is reserved for read-type operations only). The invention may be implemented in a cache hierarchy which provides memory arranged in banks, the banks being similarly synchronized. The invention is not limited to any particular type of instruction, and the synchronization functionality may be hardware or software programmable.
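
    One way to picture the synchronized snoop tenures is as a recurring cycle window that every cache controller can compute locally; the period, offset, and function names below are illustrative assumptions, since the abstract leaves the synchronization scheme programmable.

        #include <stdbool.h>
        #include <stdint.h>

        #define SNOOP_SYNC_PERIOD 8   /* tenure recurs every N cycles (assumed)  */
        #define SNOOP_SYNC_OFFSET 0   /* cycle within the period reserved for it */

        /* Every controller agrees on when the synchronized snoop tenures occur. */
        static bool is_synchronized_snoop_cycle(uint64_t cycle)
        {
            return (cycle % SNOOP_SYNC_PERIOD) == SNOOP_SYNC_OFFSET;
        }

        /* Retry-sensitive requests such as kill-type operations are held until
         * one of those synchronized cycles. */
        static bool may_issue_kill_operation(uint64_t cycle)
        {
            return is_synchronized_snoop_cycle(cycle);
        }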