Abstract:
A methodology and implementing system are provided in which PCI system configuration data is made available to a host X86 system CPU through an intermediate PowerPC system. A bus converter circuit connected between the X86 bus and the PowerPC bus is effective to translate configuration addresses between the X86 and PowerPC systems. A PCI host bridge arrangement includes a primary PCI host bridge circuit and a plurality of secondary peer PCI host bridge circuits. The primary host bridge circuit is effective to process configuration data requests from the bus converter circuit which are directed to any of the secondary PCI host bridge circuits.
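For illustration, a minimal C++ sketch of the routing decision the primary host bridge would make, assuming the standard Type 1 CONFIG_ADDRESS bit layout and that each secondary peer bridge owns a contiguous range of downstream bus numbers; the names (`PeerBridge`, `routeConfigRequest`) are hypothetical, not from the patent.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Decode of a Type 1 PCI CONFIG_ADDRESS value:
// bits 23:16 bus, 15:11 device, 10:8 function, 7:2 register.
struct ConfigAddress {
    uint8_t bus, device, function, reg;
};

ConfigAddress decode(uint32_t addr) {
    return { static_cast<uint8_t>((addr >> 16) & 0xFF),
             static_cast<uint8_t>((addr >> 11) & 0x1F),
             static_cast<uint8_t>((addr >> 8) & 0x07),
             static_cast<uint8_t>(addr & 0xFC) };
}

// Each secondary peer bridge owns a contiguous range of PCI bus numbers.
struct PeerBridge {
    uint8_t firstBus;
    uint8_t lastBus;
};

// Returns the index of the secondary bridge whose bus range contains the
// target bus, or -1 when the primary host bridge services the request itself.
int routeConfigRequest(const std::vector<PeerBridge>& peers, uint32_t addr) {
    const ConfigAddress ca = decode(addr);
    for (std::size_t i = 0; i < peers.size(); ++i)
        if (ca.bus >= peers[i].firstBus && ca.bus <= peers[i].lastBus)
            return static_cast<int>(i);
    return -1;
}
```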
Abstract:
A method and system for allowing one or more attached devices to access a computer bus. At some particular instant in time, prioritized queues are loaded with one or more requests for access from one or more devices whose assigned priority levels correspond to the priority of the queue into which the requests are loaded. Requests resident within a current queue are granted sequentially until the current queue is emptied, after which at least one request from a queue of lower priority than the current queue is granted before other requests are serviced, such that at least one request from a lower-priority queue is periodically granted and lower-priority requesters are not starved.
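A minimal sketch of this arbitration policy, assuming a fixed number of priority levels (0 = highest) and illustrative names throughout; the promoted-request slot models the guarantee that one lower-priority request is granted before any other once the current queue drains.

```cpp
#include <array>
#include <cstddef>
#include <deque>

class Arbiter {
    static constexpr std::size_t kLevels = 4;     // priority levels, 0 = highest
    std::array<std::deque<int>, kLevels> queues;  // queued device IDs per level
    int promoted = -1;  // one request pulled up after the current queue drained
public:
    void request(int deviceId, std::size_t priority) {
        queues[priority].push_back(deviceId);
    }

    // Returns the device granted bus access next, or -1 if nothing is queued.
    int grant() {
        // A previously promoted lower-priority request is granted before
        // responding to any other request, as the abstract requires.
        if (promoted != -1) {
            int dev = promoted;
            promoted = -1;
            return dev;
        }
        for (std::size_t p = 0; p < kLevels; ++p) {
            if (queues[p].empty()) continue;
            int dev = queues[p].front();
            queues[p].pop_front();
            if (queues[p].empty()) {
                // The current queue just emptied: pull one request up from
                // the next lower-priority non-empty queue.
                for (std::size_t q = p + 1; q < kLevels; ++q) {
                    if (!queues[q].empty()) {
                        promoted = queues[q].front();
                        queues[q].pop_front();
                        break;
                    }
                }
            }
            return dev;
        }
        return -1;
    }
};
```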
Abstract:
In a cache coherent data processing system including at least first and second coherency domains, a memory block is stored in a system memory in association with a domain indicator indicating whether or not the memory block is cached, if at all, only within the first coherency domain. A master in the first coherency domain determines whether or not a scope of broadcast transmission of an operation should extend beyond the first coherency domain by reference to the domain indicator stored in the cache and then performs a broadcast of the operation within the cache coherent data processing system in accordance with the determination.
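A minimal sketch of the scope-selection step, assuming the domain indicator is available to the master alongside its cached copy of the block; all identifiers are illustrative.

```cpp
// Hypothetical per-line metadata: the domain indicator travels with the
// block and is set when the block is cached, if at all, only locally.
enum class Scope { LocalDomain, Global };

struct CacheLineMeta {
    bool domainLocal;  // cached domain indicator for the memory block
};

// The master limits the broadcast to its own coherency domain when the
// indicator shows no copies can exist outside it; otherwise the operation
// is broadcast system-wide. (A local-scope miss would typically be retried
// globally, though the abstract does not spell that step out.)
Scope selectBroadcastScope(const CacheLineMeta& meta) {
    return meta.domainLocal ? Scope::LocalDomain : Scope::Global;
}
```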
Abstract:
System bus snoopers within a multiprocessor system in which dynamic application sequence behavior information is maintained within cache directories append the dynamic application sequence behavior information for the target cache line to their snoop responses. The system controller, which may also maintain dynamic application sequence behavior information in a history directory, employs the available dynamic application sequence behavior information to append “hints” to the combined response, appends the concatenated dynamic application sequence behavior information to the combined response, or both. Either the hints or the dynamic application sequence behavior information may be employed by the bus master and other snoopers in cache management.
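As a sketch of how these pieces could fit together, assuming illustrative field names and a trivial stand-in hint heuristic (neither is specified by the abstract):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative field names; the abstract does not define the encodings.
struct SnoopResponse {
    uint8_t  coherencyVote;  // e.g. null / shared / modified / retry
    uint32_t dasbInfo;       // per-snooper behavior bits for the target line
};

struct CombinedResponse {
    uint8_t               finalVote;
    uint8_t               hint;        // controller-derived management hint
    std::vector<uint32_t> dasbConcat;  // concatenated per-snooper DASB info
};

CombinedResponse combine(const std::vector<SnoopResponse>& snoops,
                         uint32_t controllerHistory) {
    CombinedResponse cr{};
    for (const SnoopResponse& s : snoops) {
        cr.finalVote = std::max(cr.finalVote, s.coherencyVote);  // priority encode
        cr.dasbConcat.push_back(s.dasbInfo);  // append, per the abstract
    }
    // Stand-in hint heuristic: flag lines the controller's history directory
    // has seen migrate frequently between caches.
    cr.hint = (controllerHistory > 3) ? 1 : 0;
    return cr;
}
```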
Abstract:
A data processing system includes a plurality of nodes, which each contain at least one agent, and data storage accessible to agents within the nodes. The plurality of nodes are coupled by a non-hierarchical interconnect including multiple non-blocking uni-directional address channels and at least one uni-directional data channel. The agents, which are each coupled to and snoop transactions on all of the plurality of address channels, can only issue transactions on an associated address channel. The uni-directional channels employed by the present non-hierarchical interconnect architecture permit high frequency pumped operation not possible with conventional bi-directional shared system buses. In addition, access latencies to remote (cache or main) memory incurred following local cache misses are greatly reduced as compared with conventional hierarchical systems because of the absence of inter-level (e.g., bus acquisition) communication latency. The non-hierarchical interconnect architecture also permits design flexibility in that the segment of the interconnect within each node can be independently implemented by a set of buses or as a switch, depending upon cost and performance considerations.
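A structural sketch of this interconnect, assuming one address channel per issuing agent and illustrative names; the point it shows is that each channel has exactly one issuer, so the address path needs no arbitration.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

struct AddressTransaction {
    std::size_t   sourceAgent;
    unsigned long address;
};

class Interconnect {
    using Snooper = std::function<void(const AddressTransaction&)>;
    std::vector<std::vector<Snooper>> channelSnoopers;  // one list per channel
public:
    explicit Interconnect(std::size_t numAgents) : channelSnoopers(numAgents) {}

    // Every agent snoops *all* address channels.
    void attachAgent(const Snooper& snoop) {
        for (auto& channel : channelSnoopers) channel.push_back(snoop);
    }

    // An agent issues only on its own associated channel. Because each
    // channel has a single issuer, no bus acquisition is needed,
    // eliminating inter-level communication latency on the address path.
    void issue(std::size_t agentId, unsigned long address) {
        const AddressTransaction t{agentId, address};
        for (const auto& snoop : channelSnoopers[agentId]) snoop(t);
    }
};
```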
Abstract:
System bus masters within a multiprocessor system in which dynamic application sequence behavior information is maintained within cache directories append the historical access information for the target cache line to their requests. Snoopers and/or the system controller, which may also maintain dynamic application sequence behavior information in a history directory, employ the appended access information for cache management.
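A companion sketch to the snoop-side example above, with illustrative field names: here the master appends the history bits from its own cache directory entry to the outgoing request, and a snooper applies a hypothetical retention heuristic to them.

```cpp
#include <cstdint>

// Illustrative request format: the master copies the history bits from its
// cache directory entry into the outgoing bus request.
struct BusRequest {
    uint64_t address;
    uint8_t  transferType;
    uint32_t accessHistory;  // behavior bits for the target line
};

// Hypothetical snooper-side heuristic: retain (rather than victimize) lines
// whose appended history shows frequent re-reference.
bool shouldRetainOnSnoop(const BusRequest& req, uint32_t threshold) {
    return req.accessHistory >= threshold;
}
```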
Abstract:
A multiprocessor system bus protocol system and method for processing and handling a processor request within a multiprocessor system having a number of bus accessible memory devices that are snooping on at least one bus line. Snoop response groups, which are groups of different types of snoop responses from the bus accessible memory devices, are provided, and different transfer types are provided within each of the snoop response groups. A bus master device that provides a bus master signal is designated, and the bus master device receives the processor request. One of the snoop response groups and one of the transfer types are designated based on the processor request. The bus master signal is formulated from the snoop response group, the transfer type, a valid request signal, and a cache line address, and is sent to all of the bus accessible memory devices on the bus line and to a combined response logic system. All of the bus accessible memory devices on the bus line send snoop responses, based on the designated snoop response group, to the combined response logic system. The combined response logic system determines a combined response using the combined response encoding logic selected by the designated and latched snoop response group, and the combined response is sent to all of the bus accessible memory devices on the bus line.
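A minimal sketch of this flow under assumed encodings (the enumerators and reduction rules below are illustrative, not the patent's): the combined response logic selects its reduction rule according to the latched snoop response group.

```cpp
#include <cstdint>
#include <vector>

// Illustrative encodings for the group, transfer type, and responses.
enum class SnoopGroup : uint8_t { ReadLike, WriteLike };
enum class TransferType : uint8_t { ReadLine, ReadWithIntentToModify, Flush };

struct BusMasterSignal {
    SnoopGroup   group;
    TransferType type;
    bool         valid;
    uint64_t     cacheLineAddress;
};

enum class SnoopResponse : uint8_t { Null, Shared, Modified, Retry };

// Combined response logic: the reduction rule depends on which snoop
// response group was designated (and latched) for this transaction.
SnoopResponse combinedResponse(SnoopGroup group,
                               const std::vector<SnoopResponse>& responses) {
    SnoopResponse result = SnoopResponse::Null;
    for (SnoopResponse r : responses) {
        if (r == SnoopResponse::Retry) return SnoopResponse::Retry;  // always wins
        if (group == SnoopGroup::ReadLike) {
            // For read-like groups, Modified dominates Shared dominates Null.
            if (static_cast<uint8_t>(r) > static_cast<uint8_t>(result)) result = r;
        } else {
            // For write-like groups, any cached copy forces invalidation,
            // so Shared and Modified reduce the same way here.
            if (r != SnoopResponse::Null) result = SnoopResponse::Modified;
        }
    }
    return result;
}
```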
Abstract:
A data processing system includes at least first and second nodes and a segmented interconnect having coupled first and second segments. The first node includes the first segment and first and second agents coupled to the first segment, and the second node includes the second segment and a third agent coupled to the second segment. The first node further includes cancellation logic that, in response to the first agent issuing a request on the segmented interconnect that propagates from the first segment to the second segment and the second agent indicating ability to service the request, sends a cancellation message to the third agent instructing the third agent to ignore the request.
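A sketch of the cancellation flow with hypothetical agent names: the request is forwarded to the second segment speculatively, and the cancel message prunes the remote work when a local peer can service it.

```cpp
#include <cstdio>

// Agent A1 on the first segment issues a request that propagates to the
// second segment; if peer agent A2 (also on the first segment) indicates
// it can service the request, the node's cancellation logic tells agent A3
// on the second segment to ignore it.
struct Agent {
    const char* name;
    bool        canService;  // e.g. result of a local cache directory lookup
};

void issueWithCancellation(const Agent& localPeer, const Agent& remoteAgent,
                           unsigned long address) {
    // Forward to the remote segment in parallel with the local snoop,
    // hiding inter-segment latency in the common case.
    std::printf("request 0x%lx propagated to %s\n", address, remoteAgent.name);
    if (localPeer.canService) {
        // The local peer can service the request; cancel the remote copy.
        std::printf("cancel: %s instructed to ignore 0x%lx\n",
                    remoteAgent.name, address);
    }
}
```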
Abstract:
A method of maintaining cache coherency, by designating one cache that owns a line as a highest point of coherency (HPC) for a particular memory block, and sending a snoop response from the cache indicating that it is currently the HPC for the memory block and can service a request. The designation may be performed in response to a particular coherency state assigned to the cache line, or based on the setting of a coherency token bit for the cache line. The processing units may be grouped into clusters, while the memory is distributed using memory arrays associated with respective clusters. One memory array is designated as the lowest point of coherency (LPC) for the memory block (i.e., a fixed assignment) while the cache designated as the HPC is dynamic (i.e., changes as different caches gain ownership of the line). An acknowledgement snoop response is sent from the LPC memory array, and a combined response is returned to the requesting device which gives priority to the HPC snoop response over the LPC snoop response.
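A minimal sketch of the response-combining rule, assuming at most one HPC responder and a fixed LPC; the enumerators are illustrative, not taken from the patent.

```cpp
#include <cstdint>
#include <vector>

// Who answered the snoop: the dynamic HPC (owning cache), the fixed LPC
// (owning memory array), or neither.
enum class Responder : uint8_t { None, LPC, HPC };

struct Snoop {
    Responder kind;  // plus coherency state, data routing, etc. in practice
};

// Combined-response logic: an HPC response, if present, determines who
// services the request; otherwise fall back to the LPC acknowledgement
// from the owning memory array.
Responder combine(const std::vector<Snoop>& snoops) {
    Responder winner = Responder::None;
    for (const Snoop& s : snoops) {
        if (s.kind == Responder::HPC) return Responder::HPC;  // highest priority
        if (s.kind == Responder::LPC) winner = Responder::LPC;
    }
    return winner;
}
```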
Abstract:
A method of operating a computer system is disclosed in which an instruction having an explicit prefetch request is issued directly from an instruction sequence unit to a prefetch unit of a processing unit. In a preferred embodiment, two prefetch units are used: the first prefetch unit is hardware independent and dynamically monitors one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit is aware of the lower level storage subsystem and sends with the prefetch request an indication that a prefetch value is to be loaded into a lower level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit (the latter feature is particularly useful for caches which are shared by a processing unit cluster). If another prefetch value is requested from the memory hierarchy and it is determined that the cache has met a prefetch limit on cache usage, then a cache line in the cache containing one of the earlier prefetch values is allocated for receiving the other prefetch value.
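A sketch of the allocation policy in the last sentence, with hypothetical structures: once prefetched lines occupy the configured limit, a still-resident earlier prefetch is reclaimed for the incoming value (choosing the oldest here is an illustrative policy; the abstract only says "one of the earlier prefetch values").

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>

struct PrefetchedLine {
    uint64_t address;
    unsigned streamId;  // stream ID carried with the prefetch request
};

class PrefetchTracker {
    std::deque<PrefetchedLine> resident;  // prefetched lines, oldest first
    const std::size_t          limit;     // prefetch limit on cache usage
public:
    explicit PrefetchTracker(std::size_t maxLines) : limit(maxLines) {}

    // Admits a new prefetch; returns the address of the earlier prefetched
    // line to reallocate, or 0 when the limit has not yet been reached.
    uint64_t admit(uint64_t newAddress, unsigned streamId) {
        uint64_t victim = 0;
        if (resident.size() >= limit) {         // prefetch limit met
            victim = resident.front().address;  // reclaim the oldest prefetch
            resident.pop_front();
        }
        resident.push_back({newAddress, streamId});
        return victim;
    }
};
```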