Abstract:
A multiprocessor data processing system includes first and second processors coupled to an interconnect and to a global promotion facility containing a plurality of promotion bit fields. The first processor executes a single acquisition instruction to concurrently acquire a plurality of promotion bit fields exclusive of at least the second processor. In response to execution of the acquisition instruction, the first processor receives an indication of success or failure of the acquisition instruction, wherein the indication indicates success of the acquisition instruction if all of the plurality of promotion bit fields were concurrently acquired by the first processor and indicates failure of the acquisition instruction if fewer than all of the plurality of promotion bit fields were acquired by the first processor.
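To make the all-or-nothing semantics concrete, the following minimal C sketch models the global promotion facility as a single atomic word of promotion bits; the names (promotion_facility, acquire_promotion_bits) and the shared-memory analogy are illustrative assumptions, not the patented hardware facility.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Software analogy: the global promotion facility as one word of
 * promotion bits; a set bit means "held by some processor". */
static _Atomic unsigned long promotion_facility = 0;

/* Try to acquire ALL bits in 'mask' concurrently. Succeeds only if
 * every requested bit was free; otherwise acquires nothing. */
bool acquire_promotion_bits(unsigned long mask)
{
    unsigned long old = atomic_load(&promotion_facility);
    do {
        if (old & mask)          /* some requested bit already held */
            return false;        /* all-or-nothing: report failure  */
    } while (!atomic_compare_exchange_weak(&promotion_facility,
                                           &old, old | mask));
    return true;                 /* all bit fields acquired at once */
}

void release_promotion_bits(unsigned long mask)
{
    atomic_fetch_and(&promotion_facility, ~mask);
}
```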
Abstract:
A data processing system includes a global promotion facility and a plurality of processors coupled by an interconnect. In response to execution of an acquisition instruction by a first processor among the plurality of processors, the first processor transmits an address-only operation on the interconnect to acquire a promotion bit field within the global promotion facility exclusive of at least a second processor among the plurality of processors. In response to receipt of a combined response for the address-only operation representing a collective response of others of the plurality of processors to the address-only operation, the first processor determines whether or not acquisition of the promotion bit field was successful by reference to the combined response.
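The role of the combined response can be sketched in C as follows; the response encodings (RESP_RETRY, CRESP_SUCCESS) and the combining rule shown are assumptions made for illustration — the abstract specifies only that the acquiring processor judges success by reference to the combined response.

```c
#include <stdbool.h>

/* Assumed per-snooper and combined response encodings. */
typedef enum { RESP_NULL, RESP_RETRY } snoop_resp_t;
typedef enum { CRESP_SUCCESS, CRESP_RETRY } combined_resp_t;

/* The combined response represents the collective response of the
 * other processors: any RETRY (e.g., the promotion bit field is
 * already held elsewhere) makes the acquisition fail. */
combined_resp_t combine(const snoop_resp_t resps[], int n)
{
    for (int i = 0; i < n; i++)
        if (resps[i] == RESP_RETRY)
            return CRESP_RETRY;
    return CRESP_SUCCESS;
}

/* The acquiring processor decides success solely from the combined
 * response; the operation is address-only, so no data is moved. */
bool acquisition_succeeded(combined_resp_t cr)
{
    return cr == CRESP_SUCCESS;
}
```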
Abstract:
A symmetric multiprocessor data processing system having an apparatus for imprecisely tracking cache line inclusivity of a higher level cache is disclosed. The symmetric multiprocessor data processing system includes multiple processing units. Each of the processing units is associated with a level one cache memory. All the level one cache memories are associated with an imprecisely inclusive level two cache memory. The imprecisely inclusive level two cache memory includes a tracking means for imprecisely tracking cache line inclusivity of the level one cache memories. The tracking means includes a last_processor_to_store field and a more_than_two_loads field per cache line. When the more_than_two_loads field is asserted, all cache lines within the level one cache memories that share identical information with a given cache line are invalidated, except for the copy of that cache line in the level one cache memory associated with the processor indicated in the last_processor_to_store field.
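A rough C model of the imprecise tracking state, assuming the two per-line fields named in the abstract and a hypothetical l1_invalidate hook standing in for the back-invalidation path:

```c
#include <stdbool.h>
#include <stdio.h>

#define NPROC 4  /* assumed number of L1 caches behind the shared L2 */

/* Per-L2-line imprecise inclusivity tracking (field names follow
 * the abstract). */
struct l2_line_tracking {
    int  last_processor_to_store;  /* ID of the last storing processor */
    bool more_than_two_loads;      /* set once sharers exceed tracking */
};

static void l1_invalidate(int cpu, unsigned long addr)  /* assumed hook */
{
    printf("invalidate line %#lx in L1 of processor %d\n", addr, cpu);
}

/* With imprecise tracking asserted, the L2 cannot name the sharers,
 * so it invalidates every L1 copy except the one held by the L1 of
 * last_processor_to_store. */
void back_invalidate(const struct l2_line_tracking *t, unsigned long addr)
{
    if (!t->more_than_two_loads)
        return;  /* sharers are tracked precisely elsewhere */
    for (int p = 0; p < NPROC; p++)
        if (p != t->last_processor_to_store)
            l1_invalidate(p, addr);
}
```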
Abstract:
A method and data processing system that support pipelining of Input/Output (I/O) DMA Write transactions. An I/O processor's operational protocol is provided with a pair of instructions/commands that are utilized to complete a DMA Write operation. The instructions are DMA_Write_No_Data and DMA_Write_With_Data. DMA_Write_No_Data is an address-only operation on the system bus that is utilized to acquire ownership of a cache line that is to be written. The ownership of the cache line is marked by a weak DMA state, which indicates that the cache line is being held for writing to the memory, but that the cache line cannot yet force a retry of snooped operations. When each preceding DMA Write operation has completed or each corresponding DMA_Write_No_Data operation has been placed in a DMA Exclusive state, the weak DMA state is changed to a DMA Exclusive state, which forces a retry of snooped operations until the write transaction to memory is completed. In this way, DMA Writes that are provided sequentially may be issued in parallel on the system bus, and their corresponding DMA_Write_No_Data operations may be completed in any order, but cannot be made DMA Exclusive unless the above conditions are satisfied. Further, once a DMA Exclusive state is acquired, a DMA_Write_With_Data may be issued for each of the sequential DMA Write operations in the DMA Exclusive state. The DMA_Write_With_Data operations may then be completed out of order with respect to one another. However, the system processor is sent the completion messages in the sequential order of the DMA Write operations, thus adhering to the processor's requirements for ordered operations while providing fully pipelined (parallel) execution of the DMA transactions.
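The ordering rule for promoting a weak DMA state to DMA Exclusive might be modeled as below; the array-indexed bookkeeping and the function names are assumptions made for this sketch.

```c
#include <stdbool.h>

/* Per-pending-DMA-write state (names follow the abstract). */
typedef enum {
    DMA_WEAK,        /* line owned, cannot yet retry snoops      */
    DMA_EXCLUSIVE,   /* retries snoops until the write completes */
    DMA_DONE
} dma_state_t;

#define MAXW 16
static dma_state_t pending[MAXW];  /* index = sequential (program) order */

/* A weak entry may be promoted only when every earlier DMA Write has
 * completed or has itself reached the DMA Exclusive state. */
bool can_promote(int i)
{
    for (int j = 0; j < i; j++)
        if (pending[j] == DMA_WEAK)
            return false;
    return true;
}

void try_promote_all(void)
{
    for (int i = 0; i < MAXW; i++)
        if (pending[i] == DMA_WEAK && can_promote(i))
            pending[i] = DMA_EXCLUSIVE;  /* now eligible for a
                                            DMA_Write_With_Data */
}
```

Completion messages would still be reported to the system processor in index order, even though the writes themselves may finish out of order.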
Abstract:
A multiprocessor data processing system comprising, in addition to first and second processors having respective first and second caches and a main cache directory affiliated with the first processor's cache, a secondary cache directory of the first cache, which contains a subset of cache line addresses from the main cache directory corresponding to cache lines that are in a first or second coherency state. The second coherency state indicates to the first processor that requests issued from the first processor for a cache line whose address is within the secondary directory should utilize super-coherent data currently available in the first cache and should not be issued on the system interconnect. Additionally, the cache controller logic includes a clear-on-barrier flag (COBF) associated with the secondary directory, which is set whenever an operation of the first processor is issued to the system interconnect. If a barrier instruction is received by the first processor while the COBF is set, the contents of the secondary directory are immediately flushed and the cache lines are tagged with an invalid state.
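A minimal C sketch of the secondary directory and its clear-on-barrier flag; the structure layout, capacity, and function names are hypothetical.

```c
#include <stdbool.h>
#include <string.h>

#define SDIR_SIZE 64  /* assumed secondary-directory capacity */

/* Hypothetical secondary directory: addresses of cache lines whose
 * requests should use super-coherent data already in the first cache. */
struct secondary_dir {
    unsigned long addr[SDIR_SIZE];
    bool          valid[SDIR_SIZE];
    bool          cobf;            /* clear-on-barrier flag */
};

/* Every operation the first processor issues to the system
 * interconnect sets the COBF. */
void note_interconnect_op(struct secondary_dir *d)
{
    d->cobf = true;
}

/* Barrier received while COBF is set: flush the secondary directory
 * immediately; its cache lines revert to an invalid state. */
void on_barrier(struct secondary_dir *d)
{
    if (d->cobf) {
        memset(d->valid, 0, sizeof d->valid);
        d->cobf = false;
    }
}
```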
Abstract:
A method for improving performance of a multiprocessor data processing system, comprising snooping, by a second processor whose cache contains an updated copy of a shared cache line, a request for data held within the shared cache line on a system bus of the data processing system and, responsive to the snoop of the request by the second processor, issuing a first response on the system bus indicating to the requesting processor that the requesting processor may utilize data currently stored within the shared cache line of a cache of the requesting processor. When the request is snooped by the second processor and the second processor decides to release a lock on the cache line to the requesting processor, the second processor issues a second response on the system bus indicating that the requesting processor should utilize new/coherent data, and the second processor then releases the lock to the requesting processor.
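The second processor's choice between the two responses can be summarized in a small C sketch; the response names and the release_lock policy input are assumptions.

```c
#include <stdbool.h>

/* Assumed response encodings for the snooped request. */
typedef enum {
    RESP_USE_LOCAL,  /* first response: requester may keep using the
                        data already in its own shared cache line */
    RESP_GO_COHERENT /* second response: requester should fetch
                        new/coherent data */
} snoop_response_t;

/* Snoop handler of the second processor, which holds an updated copy
 * and a lock on the line; release_lock is an assumed policy input. */
snoop_response_t snoop_shared_line_request(bool release_lock)
{
    if (!release_lock)
        return RESP_USE_LOCAL;   /* keep the lock; requester proceeds
                                    on its current copy */
    return RESP_GO_COHERENT;     /* then release the lock to the
                                    requesting processor */
}
```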
Abstract:
Disclosed herein are a data processing system and method of operating a data processing system that arbitrate between conflicting requests to modify data cached in a shared state and that protect ownership of the cache line granted during such arbitration until modification of the data is complete. The data processing system includes a plurality of agents coupled to an interconnect that supports pipelined transactions. While data associated with a target address are cached at a first agent among the plurality of agents in a shared state, the first agent issues a transaction on the interconnect. In response to snooping the transaction, a second agent provides a snoop response indicating that the second agent has a pending conflicting request and a coherency decision point provides a snoop response granting the first agent ownership of the data. In response to the snoop responses, the first agent is provided with a combined response representing a collective response to the transaction of all of the agents that grants the first agent ownership of the data. In response to the combined response, the first agent is permitted to modify the data.
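One way to picture the combined-response arbitration in C, assuming hypothetical snoop-response encodings in which the grant from the coherency decision point dominates the conflicting snooper's response:

```c
/* Assumed snoop-response encodings for the shared-to-modified upgrade. */
typedef enum {
    SNOOP_NULL,
    SNOOP_CONFLICT,  /* snooper has its own pending conflicting request */
    SNOOP_GRANT      /* coherency decision point grants ownership */
} snoop_t;

typedef enum { COMBINED_GRANT, COMBINED_RETRY } combined_t;

/* The combined response is the collective response of all agents.
 * The decision point's grant wins the arbitration: even with a
 * conflicting request outstanding, the first agent receives ownership
 * and may modify the data, protected until the modification is done. */
combined_t combine(const snoop_t s[], int n)
{
    for (int i = 0; i < n; i++)
        if (s[i] == SNOOP_GRANT)
            return COMBINED_GRANT;
    return COMBINED_RETRY;
}
```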
Abstract:
A non-uniform memory access (NUMA) computer system and associated method of operation are disclosed. The NUMA computer system includes at least a remote node and a home node coupled to an interconnect. The remote node contains at least one processing unit coupled to a remote system memory, and the home node contains at least a home system memory. To reduce access latency for data from other nodes, a portion of the remote system memory is allocated as a remote memory cache containing data corresponding to data resident in the home system memory. In one embodiment, access bandwidth to the remote memory cache is increased by distributing the remote memory cache across multiple system memories in the remote node.
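A simple C sketch of a direct-mapped lookup in the remote memory cache; the capacity, 128-byte line size, and mapping are assumptions made only to illustrate the hit check that avoids a round trip to the home node.

```c
#include <stdbool.h>

#define RMC_LINES 1024  /* assumed remote-memory-cache capacity */

/* A slice of the remote node's system memory allocated as a cache of
 * data resident in the home node's system memory (tags hypothetical). */
struct remote_memory_cache {
    unsigned long tag[RMC_LINES];
    bool          valid[RMC_LINES];
};

/* Hit -> serve the access from local (remote-node) memory, avoiding
 * the node-interconnect latency of fetching from the home node. */
bool rmc_hit(const struct remote_memory_cache *rmc, unsigned long addr)
{
    unsigned long line = addr / 128;        /* 128-byte lines assumed */
    unsigned long idx  = line % RMC_LINES;
    return rmc->valid[idx] && rmc->tag[idx] == line / RMC_LINES;
}
```

Distributing the remote memory cache across multiple system memories in the node, as the abstract's embodiment describes, would amount to interleaving idx across the memories to raise access bandwidth.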
Abstract:
A method of operation within a processor that permits load instructions following barrier instructions in an instruction sequence to be issued speculatively. The barrier instruction is executed and while the barrier operation is pending, a load request associated with the load instruction is speculatively issued. A speculation flag is set to indicate the load instruction was speculatively issued. The flag is reset when an acknowledgment of the barrier operation is received. Data that is returned before the acknowledgment is received is temporarily held, and the data is forwarded to the register and/or execution unit of the processor only after the acknowledgment is received. If a snoop invalidate is detected for the speculatively issued load request before the barrier operation completes, the data is discarded and the load request is re-issued.
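The speculation flag and held-data handling might look as follows in C; the per-load structure and the forwarding and re-issue hooks are hypothetical.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-load tracking for a load issued speculatively
 * while a barrier operation is still pending. */
struct spec_load {
    bool          spec_flag;   /* set when issued under the barrier   */
    bool          data_ready;  /* data returned but held, not forwarded */
    unsigned long data;
};

static void forward_to_register(unsigned long data)  /* assumed hook */
{
    printf("forward %#lx to register/execution unit\n", data);
}

static void reissue_load(struct spec_load *l)        /* assumed hook */
{
    l->data_ready = false;  /* discard held data and re-send request */
}

/* Barrier acknowledgment received: reset the speculation flag and
 * only now forward any held data. */
void on_barrier_ack(struct spec_load *l)
{
    l->spec_flag = false;
    if (l->data_ready)
        forward_to_register(l->data);
}

/* Snooped invalidate hits the speculative load before the barrier
 * completes: the held data is discarded and the load is re-issued. */
void on_snoop_invalidate(struct spec_load *l)
{
    if (l->spec_flag)
        reissue_load(l);
}
```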
Abstract:
A method for improving performance of a multiprocessor data processing system having processor groups with shared caches. When a processor within a processor group that shares a cache snoops a modification to a shared cache line in a cache of another processor that is not within the processor group, the coherency state of the shared cache line within the group's shared cache is set to a first coherency state that indicates that the cache line has been modified by a processor not within the processor group and that the cache line has not yet been updated within the group's cache. When a request for the cache line is later issued by a processor in the group, the request is issued to the system bus or interconnect. If a received response to the request indicates that the processor should utilize super-coherent data, the coherency state of the cache line is set to a processor-specific super-coherency state. This state indicates that subsequent requests for the cache line by the first processor should be provided said super-coherent data, while a subsequent request for the cache line by a next processor in the processor group that has not yet issued a request for the cache line on the system bus may still go to the system bus to request the cache line. The individualized, processor-specific super-coherency states are individually set but are usually changed to another coherency state (e.g., Modified or Invalid) as a group.
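A C sketch of the per-processor super-coherency flags, assuming a hypothetical per-line structure in the group's shared cache:

```c
#include <stdbool.h>

#define GROUP_SIZE 4  /* assumed processors sharing one cache */

/* Hypothetical per-line state kept in the group's shared cache. */
struct group_line {
    bool modified_elsewhere;          /* set on snooping an external
                                         store; line not yet updated */
    bool super_coherent[GROUP_SIZE];  /* per-processor flags, set
                                         individually */
};

/* A response telling this processor to use super-coherent data sets
 * only its own flag; other group members may still issue their own
 * bus requests for the line. */
void on_use_super_coherent(struct group_line *l, int cpu)
{
    l->super_coherent[cpu] = true;
}

/* A processor's request goes to the system bus only if it has not
 * yet been directed to super-coherent data for this line. */
bool must_go_to_bus(const struct group_line *l, int cpu)
{
    return !l->super_coherent[cpu];
}

/* The flags are usually cleared as a group when the line changes to
 * another coherency state (e.g., Modified or Invalid). */
void clear_group_flags(struct group_line *l)
{
    for (int i = 0; i < GROUP_SIZE; i++)
        l->super_coherent[i] = false;
}
```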