Configuration access system
    1.
    Granted patent
    Configuration access system (Expired)

    Publication number: US6101563A

    Publication date: 2000-08-08

    Application number: US80031

    Filing date: 1998-05-15

    IPC classification: G06F13/40 G06F13/00

    CPC classification: G06F13/404

    Abstract: A methodology and implementing system are provided in which PCI system configuration data is made available to a host X86 system CPU through an intermediate PowerPC system. A bus converter circuit connected between the X86 bus and the PowerPC bus is effective to translate configuration addresses between the X86 and the PowerPC systems. A PCI host bridge arrangement includes a primary PCI host bridge circuit and a plurality of secondary peer PCI host bridge circuits. The primary host bridge circuit is effective to process configuration data requests from the bus converter circuit which are directed to any of the secondary PCI host bridge circuits.
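
    As an informal illustration of the routing described in the abstract, here is a minimal, hypothetical Python sketch (the class names, bus-number ranges, and register values are invented for illustration, not taken from the patent): an x86-style configuration address is decoded by a bus converter and the primary host bridge forwards the request to whichever secondary peer bridge owns the target bus.

```python
class SecondaryHostBridge:
    """One of the peer PCI host bridges; serves a range of PCI bus numbers."""
    def __init__(self, bus_range):
        self.bus_range = bus_range
        self.config_space = {}              # (bus, dev, func, reg) -> value

    def read_config(self, bus, dev, func, reg):
        return self.config_space.get((bus, dev, func, reg), 0xFFFFFFFF)


class PrimaryHostBridge:
    """Forwards configuration requests to the secondary bridge owning the bus."""
    def __init__(self, secondaries):
        self.secondaries = secondaries

    def read_config(self, bus, dev, func, reg):
        for peer in self.secondaries:
            if bus in peer.bus_range:
                return peer.read_config(bus, dev, func, reg)
        return 0xFFFFFFFF                   # no bridge claims the bus


class BusConverter:
    """Decodes an x86 CONFIG_ADDRESS-style word and hands it to the primary bridge."""
    def __init__(self, primary):
        self.primary = primary

    def x86_config_read(self, config_address):
        bus = (config_address >> 16) & 0xFF
        dev = (config_address >> 11) & 0x1F
        func = (config_address >> 8) & 0x07
        reg = config_address & 0xFC
        return self.primary.read_config(bus, dev, func, reg)


peer = SecondaryHostBridge(bus_range=range(1, 4))
peer.config_space[(1, 0, 0, 0x00)] = 0x12341014        # illustrative vendor/device ID
converter = BusConverter(PrimaryHostBridge([peer]))
assert converter.x86_config_read(1 << 16) == 0x12341014
```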


    PCI host bridge multi-priority fairness arbiter
    2.
    Granted patent
    PCI host bridge multi-priority fairness arbiter (Expired)

    Publication number: US5905877A

    Publication date: 1999-05-18

    Application number: US853776

    Filing date: 1997-05-09

    IPC classification: G06F13/362 G06F13/14

    CPC classification: G06F13/362

    Abstract: A method and system for allowing one or more attached devices to access a computer bus. The objects of the method and system are achieved as is now described. At some particular instant in time, prioritized queues are loaded with one or more requests for access from one or more devices whose assigned priority levels correspond to the priority of the queue into which the requests for access are loaded. Requests for access resident within a current queue are preferentially granted in a sequential fashion until the current queue is emptied, after which at least one request for access from a queue of lower priority than the current queue is granted before responding to other requests for access, such that at least one request for access is periodically granted from a lower-priority queue.
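
    The fairness policy in the abstract can be pictured with a minimal, hypothetical Python sketch (the queue layout, device names, and the single-grant fairness step are illustrative assumptions, not the patented arbiter): the highest-priority non-empty queue is drained, and each time it empties one request from a lower-priority queue is granted before arbitration restarts from the top.

```python
from collections import deque


class FairnessArbiter:
    """Toy multi-priority arbiter; index 0 is the highest-priority queue."""
    def __init__(self, levels):
        self.queues = [deque() for _ in range(levels)]

    def request(self, priority, device):
        self.queues[priority].append(device)

    def grant_sequence(self):
        grants = []
        while any(self.queues):
            # Drain the highest-priority non-empty queue (the "current queue").
            current = next(q for q in self.queues if q)
            while current:
                grants.append(current.popleft())
            # Fairness step: grant one request from a lower-priority queue,
            # if any remains, before re-evaluating from the top.
            lower = next((q for q in self.queues if q), None)
            if lower:
                grants.append(lower.popleft())
        return grants


arbiter = FairnessArbiter(levels=3)
for prio, dev in [(0, "gfx"), (0, "nic"), (1, "disk"), (2, "audio")]:
    arbiter.request(prio, dev)
print(arbiter.grant_sequence())     # ['gfx', 'nic', 'disk', 'audio']
```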


    Data processing system and method for efficient coherency communication utilizing coherency domain indicators
    3.
    Granted patent
    Data processing system and method for efficient coherency communication utilizing coherency domain indicators (Active)

    Publication number: US07774555B2

    Publication date: 2010-08-10

    Application number: US11835259

    Filing date: 2007-08-07

    IPC classification: G06F12/00

    Abstract: In a cache coherent data processing system including at least first and second coherency domains, a memory block is stored in a system memory in association with a domain indicator indicating whether or not the memory block is cached, if at all, only within the first coherency domain. A master in the first coherency domain determines whether or not the scope of broadcast transmission of an operation should extend beyond the first coherency domain by reference to the domain indicator stored in the cache, and then performs a broadcast of the operation within the cache coherent data processing system in accordance with the determination.
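
    The scope decision the abstract describes might be sketched, very loosely, in Python as follows (the class names, the boolean domain indicator, and the two scope values are assumptions made for illustration, not the patented mechanism):

```python
from enum import Enum


class Scope(Enum):
    LOCAL = "broadcast within the local coherency domain"
    GLOBAL = "broadcast system-wide"


class CacheLine:
    def __init__(self, data, local_only):
        self.data = data
        self.local_only = local_only    # domain indicator: cached, if at all,
                                        # only within the first coherency domain


class Master:
    def __init__(self, cache):
        self.cache = cache              # address -> CacheLine

    def broadcast_scope(self, address):
        line = self.cache.get(address)
        if line is not None and line.local_only:
            return Scope.LOCAL          # indicator says a local broadcast suffices
        return Scope.GLOBAL             # otherwise broadcast beyond the first domain


master = Master({0x1000: CacheLine(data=b"\x00" * 128, local_only=True)})
assert master.broadcast_scope(0x1000) is Scope.LOCAL
assert master.broadcast_scope(0x2000) is Scope.GLOBAL
```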


    Enhanced multiprocessor response bus protocol enabling intra-cache line reference exchange
    4.
    Granted patent
    Enhanced multiprocessor response bus protocol enabling intra-cache line reference exchange (Expired)

    Publication number: US06704843B1

    Publication date: 2004-03-09

    Application number: US09696890

    Filing date: 2000-10-26

    IPC classification: G06F12/08

    CPC classification: G06F12/0831

    Abstract: System bus snoopers within a multiprocessor system in which dynamic application sequence behavior information is maintained within cache directories append the dynamic application sequence behavior information for the target cache line to their snoop responses. The system controller, which may also maintain dynamic application sequence behavior information in a history directory, employs the available dynamic application sequence behavior information to append “hints” to the combined response, appends the concatenated dynamic application sequence behavior information to the combined response, or both. Either the hints or the dynamic application sequence behavior information may be employed by the bus master and other snoopers in cache management.
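
    As a loose illustration of the response flow described above, here is a minimal, hypothetical Python sketch (the dictionary-based directory, the combining rule, and the "hint" wording are invented for illustration): each snooper appends whatever behavior history it holds for the line, and the controller concatenates the histories and a derived hint onto the combined response.

```python
def snoop(directory, address, base_response):
    """A snooper appends its recorded behavior bits for the line to its response."""
    history = directory.get(address, 0)
    return {"response": base_response, "history": history}


def combined_response(snoop_responses):
    responses = [r["response"] for r in snoop_responses]
    histories = [r["history"] for r in snoop_responses]
    # Illustrative combining rule: "retry" dominates, then "shared", else "null".
    if "retry" in responses:
        result = "retry"
    elif "shared" in responses:
        result = "shared"
    else:
        result = "null"
    return {
        "response": result,
        "histories": histories,                       # concatenated behavior info
        "hint": "likely-reused" if any(histories) else "no-history",
    }


snoops = [snoop({0x80: 0b11}, 0x80, "shared"), snoop({}, 0x80, "null")]
print(combined_response(snoops))
```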


    Multi-node data processing system having a non-hierarchical interconnect architecture
    5.
    Granted patent
    Multi-node data processing system having a non-hierarchical interconnect architecture (Active)

    Publication number: US06671712B1

    Publication date: 2003-12-30

    Application number: US09436898

    Filing date: 1999-11-09

    IPC classification: G06F15/16

    CPC classification: G06F13/4217

    Abstract: A data processing system includes a plurality of nodes, which each contain at least one agent, and data storage accessible to agents within the nodes. The plurality of nodes are coupled by a non-hierarchical interconnect including multiple non-blocking uni-directional address channels and at least one uni-directional data channel. The agents, which are each coupled to and snoop transactions on all of the plurality of address channels, can only issue transactions on an associated address channel. The uni-directional channels employed by the present non-hierarchical interconnect architecture permit high frequency pumped operation not possible with conventional bi-directional shared system buses. In addition, access latencies to remote (cache or main) memory incurred following local cache misses are greatly reduced as compared with conventional hierarchical systems because of the absence of inter-level (e.g., bus acquisition) communication latency. The non-hierarchical interconnect architecture also permits design flexibility in that the segment of the interconnect within each node can be independently implemented by a set of buses or as a switch, depending upon cost and performance considerations.
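
    The channel discipline described above (snoop every address channel, issue only on your own) can be modelled with a minimal, hypothetical Python sketch; the queue-backed channels and agent names are illustrative only, not the patented interconnect.

```python
from collections import deque


class AddressChannel:
    def __init__(self, name):
        self.name = name
        self.transactions = deque()


class Agent:
    def __init__(self, name, own_channel, all_channels):
        self.name = name
        self.own_channel = own_channel      # the only channel this agent may drive
        self.all_channels = all_channels    # every agent snoops every channel

    def issue(self, transaction):
        self.own_channel.transactions.append((self.name, transaction))

    def snoop_all(self):
        seen = []
        for channel in self.all_channels:
            seen.extend(channel.transactions)
        return seen


channels = [AddressChannel(f"addr{i}") for i in range(2)]
agent0 = Agent("agent0", channels[0], channels)
agent1 = Agent("agent1", channels[1], channels)
agent0.issue("read 0x100")
agent1.issue("rwitm 0x200")
assert len(agent0.snoop_all()) == 2         # both agents observe both channels
```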


    Multiprocessor system bus protocol with group addresses, responses, and priorities
    7.
    Granted patent
    Multiprocessor system bus protocol with group addresses, responses, and priorities (Active)

    Publication number: US06591321B1

    Publication date: 2003-07-08

    Application number: US09437200

    Filing date: 1999-11-09

    IPC classification: G06F12/00

    CPC classification: G06F12/0831

    Abstract: A multiprocessor system bus protocol system and method for processing and handling a processor request within a multiprocessor system having a number of bus accessible memory devices that are snooping on at least one bus line. Snoop response groups, which are groups of different types of snoop responses from the bus accessible memory devices, are provided. Different transfer types are provided within each of the snoop response groups. A bus master device that provides a bus master signal is designated. The bus master device receives the processor request. One of the snoop response groups and one of the transfer types are appropriately designated based on the processor request. The bus master signal is formulated from a snoop response group, a transfer type, a valid request signal, and a cache line address. The bus master signal is sent to all of the bus accessible memory devices on the cache bus line and to a combined response logic system. All of the bus accessible memory devices on the cache bus line send snoop responses in response to the bus master signal based on the designated snoop response group. The snoop responses are sent to the combined response logic system. A combined response by the combined response logic system is determined based on the appropriate combined response encoding logic determined by the designated and latched snoop response group. The combined response is sent to all of the bus accessible memory devices on the cache bus line.
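
    A minimal, hypothetical Python sketch of the grouped-response idea follows (the group contents, transfer types, and priority order are invented for illustration, not taken from the patent): the bus master signal names a snoop response group, snoopers answer from that group, and the combining logic applies the priority order of the latched group.

```python
RESPONSE_GROUPS = {
    # group name -> responses ordered from highest to lowest priority
    "read_group": ["retry", "modified", "shared", "null"],
    "write_group": ["retry", "owned", "null"],
}


def bus_master_signal(group, transfer_type, address):
    """Formulate the bus master signal from group, transfer type, and address."""
    return {"group": group, "transfer": transfer_type, "valid": True, "address": address}


def combined_response(signal, snoop_responses):
    # The encoding logic is selected by the designated (latched) group: the
    # highest-priority response any snooper reported within that group wins.
    for candidate in RESPONSE_GROUPS[signal["group"]]:
        if candidate in snoop_responses:
            return candidate
    return "null"


signal = bus_master_signal("read_group", "read", 0x3F80)
assert combined_response(signal, ["null", "shared", "null"]) == "shared"
```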


    Multi-node data processing system and communication protocol in which a stomp signal is propagated to cancel a prior request
    8.
    Granted patent
    Multi-node data processing system and communication protocol in which a stomp signal is propagated to cancel a prior request (Expired)

    Publication number: US06519665B1

    Publication date: 2003-02-11

    Application number: US09436900

    Filing date: 1999-11-09

    IPC classification: G06F15/16

    CPC classification: G06F15/16

    Abstract: A data processing system includes at least first and second nodes and a segmented interconnect having coupled first and second segments. The first node includes the first segment and first and second agents coupled to the first segment, and the second node includes the second segment and a third agent coupled to the second segment. The first node further includes cancellation logic that, in response to the first agent issuing a request on the segmented interconnect that propagates from the first segment to the second segment and the second agent indicating ability to service the request, sends a cancellation message to the third agent instructing the third agent to ignore the request.
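
    The cancellation ("stomp") behavior in the abstract can be pictured with a minimal, hypothetical Python sketch (agent names, the request id, and the return strings are illustrative): when an agent in the requesting node indicates it can service the request, the node's cancellation logic tells the remote agent to ignore the copy that has already propagated.

```python
class Agent:
    def __init__(self, name, can_service=False):
        self.name = name
        self.can_service = can_service
        self.ignored = set()

    def ignore(self, request_id):
        self.ignored.add(request_id)        # drop the request without servicing it


def issue_request(request_id, local_agents, remote_agents):
    # The request propagates from the local segment to the remote segment; if a
    # local agent can service it, a cancellation message stomps the remote copy.
    if any(agent.can_service for agent in local_agents):
        for remote in remote_agents:
            remote.ignore(request_id)
        return "serviced locally; remote copy cancelled"
    return "awaiting remote service"


agent2 = Agent("agent2", can_service=True)
agent3 = Agent("agent3")
print(issue_request("req-42", [agent2], [agent3]))
assert "req-42" in agent3.ignored
```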


    Multiprocessor system in which a cache serving as a highest point of coherency is indicated by a snoop response
    9.
    Granted patent
    Multiprocessor system in which a cache serving as a highest point of coherency is indicated by a snoop response (Expired)

    Publication number: US06405289B1

    Publication date: 2002-06-11

    Application number: US09437196

    Filing date: 1999-11-09

    IPC classification: G06F12/00

    Abstract: A method of maintaining cache coherency by designating one cache that owns a line as the highest point of coherency (HPC) for a particular memory block, and sending a snoop response from that cache indicating that it is currently the HPC for the memory block and can service a request. The designation may be performed in response to a particular coherency state assigned to the cache line, or based on the setting of a coherency token bit for the cache line. The processing units may be grouped into clusters, while the memory is distributed using memory arrays associated with respective clusters. One memory array is designated as the lowest point of coherency (LPC) for the memory block (i.e., a fixed assignment), while the cache designated as the HPC is dynamic (i.e., it changes as different caches gain ownership of the line). An acknowledgement snoop response is sent from the LPC memory array, and a combined response is returned to the requesting device which gives priority to the HPC snoop response over the LPC snoop response.
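
    The HPC/LPC priority described above might be sketched in Python as follows (the response names and the priority order are assumptions for illustration): the fixed home memory array acknowledges as LPC, the cache currently designated HPC claims the line, and the combined response prefers the HPC response.

```python
SNOOP_PRIORITY = ["hpc", "lpc_ack", "null"]     # HPC outranks the LPC acknowledgement


def snoop(device, address):
    if device.get("hpc_for") == address:
        return "hpc"                            # this cache currently owns the line as HPC
    if device.get("lpc_for") == address:
        return "lpc_ack"                        # fixed home memory array (LPC)
    return "null"


def combined_response(responses):
    for candidate in SNOOP_PRIORITY:
        if candidate in responses:
            return candidate
    return "null"


devices = [{"hpc_for": 0x4000}, {"lpc_for": 0x4000}, {}]
responses = [snoop(d, 0x4000) for d in devices]
assert combined_response(responses) == "hpc"    # requester is steered to the HPC cache
```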


    Optimized cache allocation algorithm for multiple speculative requests
    10.
    Granted patent
    Optimized cache allocation algorithm for multiple speculative requests (Expired)

    Publication number: US06393528B1

    Publication date: 2002-05-21

    Application number: US09345714

    Filing date: 1999-06-30

    IPC classification: G06F12/00

    CPC classification: G06F12/0862 G06F12/127

    Abstract: A method of operating a computer system is disclosed in which an instruction having an explicit prefetch request is issued directly from an instruction sequence unit to a prefetch unit of a processing unit. In a preferred embodiment, two prefetch units are used, the first prefetch unit being hardware independent and dynamically monitoring one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit being aware of the lower level storage subsystem and sending with the prefetch request an indication that a prefetch value is to be loaded into a lower level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit (the latter feature is particularly useful for caches which are shared by a processing unit cluster). If another prefetch value is requested from the memory hierarchy and it is determined that a prefetch limit of cache usage has been met by the cache, then a cache line in the cache containing one of the earlier prefetch values is allocated for receiving the other prefetch value.
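
    The allocation policy at the end of the abstract can be illustrated with a minimal, hypothetical Python sketch (the split between demand and prefetch lines, the limit, and the oldest-first victim choice are assumptions, not the patented algorithm): once the prefetch budget is used up, a new prefetch value displaces a line holding an earlier prefetch value rather than a demand-fetched line.

```python
from collections import OrderedDict


class PrefetchAwareCache:
    """Toy cache that caps how many lines speculative prefetches may occupy."""
    def __init__(self, prefetch_limit):
        self.prefetch_limit = prefetch_limit
        self.demand_lines = {}                      # address -> data
        self.prefetch_lines = OrderedDict()         # address -> data, oldest first

    def install_prefetch(self, address, data):
        if len(self.prefetch_lines) >= self.prefetch_limit:
            # Prefetch limit reached: reuse the line holding the oldest earlier
            # prefetch value to receive the new prefetch value.
            self.prefetch_lines.popitem(last=False)
        self.prefetch_lines[address] = data

    def install_demand(self, address, data):
        self.demand_lines[address] = data           # demand lines are left alone here


cache = PrefetchAwareCache(prefetch_limit=2)
for addr in (0x100, 0x140, 0x180):
    cache.install_prefetch(addr, b"line")
assert list(cache.prefetch_lines) == [0x140, 0x180]  # 0x100 was displaced
```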
