Abstract:
A multiprocessor system bus protocol system and method for processing and handling a processor request within a multiprocessor system having a number of bus accessible memory devices that are snooping on at least one bus line. Snoop response groups, which are groups of different types of snoop responses from the bus accessible memory devices, are provided. Different transfer types are provided within each of the snoop response groups. A bus master device that provides a bus master signal is designated. The bus master device receives the processor request. One of the snoop response groups and one of the transfer types are appropriately designated based on the processor request. The bus master signal is formulated from a snoop response group, a transfer type, a valid request signal, and a cache line address. The bus master signal is sent to all of the bus accessible memory devices on the cache bus line and to a combined response logic system. All of the bus accessible memory devices on the cache bus line send snoop responses in response to the bus master signal based on the designated snoop response group. The snoop responses are sent to the combined response logic system. A combined response by the combined response logic system is determined based on the appropriate combined response encoding logic determined by the designated and latched snoop response group. The combined response is sent to all of the bus accessible memory devices on the cache bus line.
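A minimal Python sketch of the flow described above, assuming illustrative snoop response groups, response values, and per-group priority tables (none of these names or encodings come from the abstract):

```python
from enum import Enum, auto

class SnoopGroup(Enum):
    """Hypothetical snoop response groups; the abstract does not enumerate them."""
    READ = auto()
    WRITE = auto()

class SnoopResponse(Enum):
    NULL = auto()
    SHARED = auto()
    MODIFIED = auto()
    RETRY = auto()

# Per-group combined-response encoding logic: the group latched from the
# bus master signal selects which priority table is applied (illustrative).
COMBINE_PRIORITY = {
    SnoopGroup.READ:  [SnoopResponse.RETRY, SnoopResponse.MODIFIED,
                       SnoopResponse.SHARED, SnoopResponse.NULL],
    SnoopGroup.WRITE: [SnoopResponse.RETRY, SnoopResponse.MODIFIED,
                       SnoopResponse.NULL, SnoopResponse.SHARED],
}

def bus_master_signal(group, transfer_type, valid, line_addr):
    """Formulate the bus master signal from a snoop response group, a
    transfer type, a valid request signal, and a cache line address."""
    return {"group": group, "ttype": transfer_type, "valid": valid, "addr": line_addr}

def combined_response(signal, snoop_responses):
    """Return the highest-priority response under the latched group's encoding."""
    for resp in COMBINE_PRIORITY[signal["group"]]:
        if resp in snoop_responses:
            return resp
    return SnoopResponse.NULL

sig = bus_master_signal(SnoopGroup.READ, "burst", True, 0x80)
print(combined_response(sig, [SnoopResponse.SHARED, SnoopResponse.NULL]))
```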
Abstract:
A data processing system includes at least first and second nodes and a segmented interconnect having coupled first and second segments. The first node includes the first segment and first and second agents coupled to the first segment, and the second node includes the second segment and a third agent coupled to the second segment. The first node further includes cancellation logic that, in response to the first agent issuing a request on the segmented interconnect that propagates from the first segment to the second segment and the second agent indicating ability to service the request, sends a cancellation message to the third agent instructing the third agent to ignore the request.
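The cancellation behaviour can be sketched as below; the agent names, the can_service predicate, and the message representation are illustrative assumptions, not taken from the abstract:

```python
class Agent:
    def __init__(self, name, can_service):
        self.name = name
        self.can_service = can_service
        self.ignored = set()

    def snoop(self, request):
        # Snoop the request propagated onto this agent's segment.
        return self.can_service(request)

    def cancel(self, request):
        # Instructed by the first node's cancellation logic to ignore the request.
        self.ignored.add(request)

def issue_request(request, local_agents, remote_agents):
    """Cancellation logic in the first node: if a local agent indicates it
    can service the request, send a cancellation message to remote agents."""
    if any(a.snoop(request) for a in local_agents):
        for a in remote_agents:
            a.cancel(request)

a2 = Agent("agent2", can_service=lambda r: True)   # same node as the requester
a3 = Agent("agent3", can_service=lambda r: False)  # on the second segment
issue_request("load:0x100", [a2], [a3])
print("load:0x100" in a3.ignored)  # True: agent3 ignores the request
```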
Abstract:
A method of maintaining cache coherency by designating one cache that owns a line as the highest point of coherency (HPC) for a particular memory block, and sending a snoop response from the cache indicating that it is currently the HPC for the memory block and can service a request. The designation may be performed in response to a particular coherency state assigned to the cache line, or based on the setting of a coherency token bit for the cache line. The processing units may be grouped into clusters, while the memory is distributed using memory arrays associated with respective clusters. One memory array is designated as the lowest point of coherency (LPC) for the memory block (i.e., a fixed assignment) while the cache designated as the HPC is dynamic (i.e., changes as different caches gain ownership of the line). An acknowledgement snoop response is sent from the LPC memory array, and a combined response is returned to the requesting device which gives priority to the HPC snoop response over the LPC snoop response.
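A minimal sketch of the combined-response priority, assuming a simple string encoding for the HPC and LPC snoop responses (the actual encodings are not given in the abstract):

```python
HPC_ACK = "hpc_ack"   # from the cache that currently owns the line (dynamic)
LPC_ACK = "lpc_ack"   # from the memory array with the fixed LPC assignment
NULL = "null"

def combined_response(snoop_responses):
    """The HPC response takes priority over the LPC acknowledgement, so the
    owning cache, not memory, services the request (illustrative encoding)."""
    if HPC_ACK in snoop_responses:
        return HPC_ACK
    if LPC_ACK in snoop_responses:
        return LPC_ACK
    return NULL

# Both the HPC cache and the LPC memory array respond; the HPC wins.
print(combined_response([LPC_ACK, HPC_ACK, NULL]))  # hpc_ack
```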
Abstract:
A method of operating a computer system is disclosed in which an instruction having an explicit prefetch request is issued directly from an instruction sequence unit to a prefetch unit of a processing unit. In a preferred embodiment, two prefetch units are used, the first prefetch unit being hardware independent and dynamically monitoring one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit being aware of the lower level storage subsystem and sending with the prefetch request an indication that a prefetch value is to be loaded into a lower level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit (the latter feature is particularly useful for caches which are shared by a processing unit cluster). If another prefetch value is requested from the memory hierarchy, and it is determined that a prefetch limit of cache usage has been met by the cache, then a cache line in the cache containing one of the earlier prefetch values is allocated for receiving the other prefetch value.
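A small Python sketch of the prefetch-limit behaviour described in the last sentence; the limit value, the FIFO eviction order, and the class name are assumptions:

```python
class PrefetchCache:
    """Minimal model of a lower-level cache enforcing a prefetch limit."""
    def __init__(self, prefetch_limit=4):
        self.prefetch_limit = prefetch_limit
        self.prefetch_lines = []  # lines holding prefetched, not-yet-demanded values

    def load_prefetch(self, addr):
        if len(self.prefetch_lines) >= self.prefetch_limit:
            # Limit met: allocate a line containing an earlier prefetch value.
            victim = self.prefetch_lines.pop(0)
            print(f"evicting earlier prefetch {victim:#x}")
        self.prefetch_lines.append(addr)

cache = PrefetchCache(prefetch_limit=2)
for addr in (0x100, 0x140, 0x180):
    cache.load_prefetch(addr)
print([hex(a) for a in cache.prefetch_lines])  # ['0x140', '0x180']
```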
Abstract:
A methodology and implementing system are provided in which PCI system configuration data is made available to a host X86 system CPU through an intermediate PowerPC system. A bus converter circuit connected between the X86 bus and the PowerPC bus is effective to translate configuration addresses between the X86 and the PowerPC system. A PCI host bridge arrangement includes a primary PCI host bridge circuit and a plurality of secondary peer PCI host bridge circuits. The primary host bridge circuit is effective to process configuration data requests from the bus converter circuit which are directed to any of the secondary PCI host bridge circuits.
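A minimal sketch of how the primary host bridge might route configuration requests arriving from the bus converter to its secondary peers, assuming routing by PCI bus-number range (the ranges and bridge names are illustrative):

```python
def route_config_request(bus_number, primary_bridge, secondary_bridges):
    """The primary PCI host bridge processes configuration requests and
    forwards those directed to any of the secondary peer host bridges."""
    for bridge in secondary_bridges:
        lo, hi = bridge["bus_range"]
        if lo <= bus_number <= hi:
            return bridge["name"]
    return primary_bridge["name"]

primary = {"name": "phb0", "bus_range": (0, 15)}
secondaries = [{"name": "phb1", "bus_range": (16, 31)},
               {"name": "phb2", "bus_range": (32, 47)}]
print(route_config_request(20, primary, secondaries))  # phb1
```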
Abstract:
A multiprocessor computer system in which snoop operations of the caches are synchronized to allow the issuance of a cache operation during a cycle which is selected based on the particular manner in which the caches have been synchronized. Each cache controller is aware of when these synchronized snoop tenures occur, and can target these cycles for certain types of requests that are sensitive to snooper retries, such as kill-type operations. The synchronization may set up a priority scheme for systems with multiple interconnect buses, or may synchronize the refresh cycles of the DRAM memory of the snooper's directory. In another aspect of the invention, windows are created during which a directory will not receive write operations (i.e., the directory is reserved for only read-type operations). The invention may be implemented in a cache hierarchy which provides memory arranged in banks, the banks being similarly synchronized. The invention is not limited to any particular type of instruction, and the synchronization functionality may be hardware or software programmable.
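A minimal sketch of a cache controller targeting synchronized snoop tenures for retry-sensitive operations; the tenure period and the kill-type classification are assumptions:

```python
SYNC_PERIOD = 8  # synchronized snoop tenures recur every 8 cycles (illustrative)

def is_synchronized_tenure(cycle):
    """All snoopers are guaranteed to be available on these cycles."""
    return cycle % SYNC_PERIOD == 0

def issue_cycle(current_cycle, request_type):
    """Delay retry-sensitive requests (e.g. kill-type operations) until the
    next synchronized snoop tenure; other requests issue immediately."""
    if request_type != "kill":
        return current_cycle
    wait = (-current_cycle) % SYNC_PERIOD
    return current_cycle + wait

print(issue_cycle(13, "read"))  # 13: issued immediately
print(issue_cycle(13, "kill"))  # 16: next synchronized tenure
```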
Abstract:
A data processing system includes an interconnect and first and second nodes, coupled to the interconnect, that each include at least one agent. Each agent within the first and second nodes outputs a snoop response in response to snooping a transaction on the interconnect. Utilizing the snoop response of each agent within the first node, first response logic within the first node produces a first cumulative combined response. This first cumulative combined response is then combined by second response logic in the second node with the snoop response of each agent in the second node to produce a second cumulative combined response. After a complete combined response is obtained in this manner, the complete combined response is distributed to all nodes so that each agent can determine its response, if any, to the transaction.
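The cascaded combining can be sketched as below, assuming an illustrative four-value response encoding in which earlier entries take priority:

```python
# Response priority, highest first (illustrative encoding).
PRIORITY = ["retry", "modified", "shared", "null"]

def combine(*responses):
    """Merge snoop responses into a cumulative combined response."""
    return min(responses, key=PRIORITY.index)

# First response logic in node 1 combines its agents' snoop responses ...
node1_cumulative = combine("null", "shared")
# ... then node 2's response logic merges its agents' responses with it.
complete = combine(node1_cumulative, "modified", "null")
print(complete)  # modified: the complete combined response, sent to all nodes
```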
Abstract:
A programmable agent and method for managing prefetch queues provide dynamically configurable handling of priorities in a prefetching subsystem for providing look-ahead memory loads in a computer system. When its queues are at capacity, an agent handling prefetches from memory either ignores new requests, forces the new requests to retry, or cancels a pending request in order to perform the new request. The behavior can be adjusted under program control by programming a register, or the control may be coupled to a load pattern analyzer. In addition, the behavior with respect to new requests can be set to different types depending on a phase of a pending request.
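A minimal sketch of the programmable full-queue policy, assuming a three-value register encoding and FIFO cancellation of the oldest pending request (both assumptions):

```python
from collections import deque

IGNORE, RETRY, CANCEL_PENDING = 0, 1, 2  # assumed policy register encodings

class PrefetchAgent:
    """Agent whose behaviour at queue capacity is set by a policy register."""
    def __init__(self, depth, policy_register):
        self.queue = deque()
        self.depth = depth
        self.policy = policy_register

    def request(self, addr):
        if len(self.queue) < self.depth:
            self.queue.append(addr)
            return "accepted"
        if self.policy == IGNORE:
            return "ignored"
        if self.policy == RETRY:
            return "retry"
        # CANCEL_PENDING: drop a pending request to make room for the new one.
        self.queue.popleft()
        self.queue.append(addr)
        return "accepted after cancel"

agent = PrefetchAgent(depth=2, policy_register=CANCEL_PENDING)
for addr in (0x10, 0x20, 0x30):
    print(hex(addr), agent.request(addr))
```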