Abstract:
A method for reducing memory latency in a multi-node architecture. In one embodiment, a speculative read request is issued to a home node before results of a cache coherence protocol are determined. The home node initiates a read to memory to complete the speculative read request. Results of a cache coherence protocol may be determined by a coherence agent to resolve cache coherency after the speculative read request is issued.
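The flow described above can be illustrated with a short Python sketch; the HomeNode and CoherenceAgent classes and their methods are hypothetical names introduced for illustration, not structures from the abstract. The home node starts the memory access as soon as the speculative read arrives, the coherence agent resolves snoops in parallel, and a dirty copy found by snooping overrides the speculatively read memory data.

# Minimal sketch of a speculative read issued in parallel with coherence
# resolution. Node, memory, and agent names are illustrative only.
import threading

class HomeNode:
    def __init__(self, memory):
        self.memory = memory

    def speculative_read(self, address):
        # Start the memory access immediately, before snoop results arrive,
        # so memory latency overlaps with coherence resolution.
        result = {}
        def do_read():
            result["data"] = self.memory[address]
        t = threading.Thread(target=do_read)
        t.start()
        return t, result

class CoherenceAgent:
    def __init__(self, owners):
        self.owners = owners  # address -> modified data held by another cache, if any

    def resolve(self, address):
        # Snoop the caching agents; return modified data if a dirty copy exists.
        return self.owners.get(address)

def read(address, home, agent):
    pending, result = home.speculative_read(address)   # issued speculatively
    dirty_copy = agent.resolve(address)                # coherence resolved after issue
    pending.join()
    # Use the snooped data if another cache held a modified line,
    # otherwise the speculative memory read supplies the data.
    return dirty_copy if dirty_copy is not None else result["data"]

memory = {0x40: "memory-data"}
print(read(0x40, HomeNode(memory), CoherenceAgent({})))               # memory-data
print(read(0x40, HomeNode(memory), CoherenceAgent({0x40: "dirty"})))  # dirty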
Abstract:
An example embodiment of a computer system utilizing a central snoop filter includes several nodes coupled together via a switching device. Each of the nodes may include several processors and caches as well as a block of system memory. All traffic from one node to another takes place through the switching device. The switching device includes a snoop filter that tracks cache line coherency information for all caches in the computer system. The snoop filter has enough entries to track the tags and state information for all entries in all caches in all of the system's nodes. In addition to the tag and state information, the snoop filter stores information indicating which of the nodes has a copy of each cache line. The snoop filter serves in part to keep snoop transactions from being performed at nodes that do not contain a copy of the subject cache line, thereby reducing system overhead, reducing traffic across the system interconnect busses, and reducing the amount of time required to perform snoop transactions.
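A minimal Python sketch of such a presence-tracking filter follows; the CentralSnoopFilter class and its fields are illustrative assumptions rather than the patented structure. It shows how recording which nodes hold each line lets the switching device skip snoops to nodes that have no copy.

# Minimal sketch of a central snoop filter that records, per cache line,
# a coherency state and the set of nodes holding the line.

class CentralSnoopFilter:
    def __init__(self):
        self.entries = {}  # line address -> {"state": str, "nodes": set()}

    def record_fill(self, address, node, state="S"):
        entry = self.entries.setdefault(address, {"state": state, "nodes": set()})
        entry["nodes"].add(node)
        entry["state"] = state

    def nodes_to_snoop(self, address, requesting_node):
        # Only nodes recorded as holding a copy need to be snooped;
        # nodes without the line are skipped, cutting interconnect traffic.
        entry = self.entries.get(address)
        if entry is None:
            return set()
        return entry["nodes"] - {requesting_node}

sf = CentralSnoopFilter()
sf.record_fill(0x80, node=1)
sf.record_fill(0x80, node=3)
print(sf.nodes_to_snoop(0x80, requesting_node=0))  # {1, 3}
print(sf.nodes_to_snoop(0xC0, requesting_node=0))  # set() -> no snoops needed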
Abstract:
According to one embodiment, a method is disclosed. The method comprises receiving a read request from a first node in a multi-node computer system to read data from a memory at a second node. Subsequently, a write request is received from a third node to write data to the memory at the second node. The read request and the write request are detected at conflict detection circuitry. Finally, read data from the memory at the second node is transmitted to the first node.
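The following Python sketch illustrates one way such conflict detection could behave; the HomeMemoryController class and the ordering policy shown (completing the outstanding read before applying the conflicting write) are assumptions made for illustration.

# Minimal sketch of conflict detection between an in-flight read and a
# later write to the same home memory.

class HomeMemoryController:
    def __init__(self, memory):
        self.memory = memory
        self.pending_reads = {}   # address -> requesting node

    def read_request(self, node, address):
        self.pending_reads[address] = node

    def write_request(self, node, address, data):
        if address in self.pending_reads:
            # Conflict detected: a read to this address is still outstanding.
            reader = self.pending_reads.pop(address)
            read_data = self.memory[address]       # serve the read first
            self.memory[address] = data            # then apply the write
            return ("read_data_to", reader, read_data)
        self.memory[address] = data
        return ("write_done", node, None)

mc = HomeMemoryController({0x10: "old"})
mc.read_request(node=1, address=0x10)              # read from the first node
print(mc.write_request(node=3, address=0x10, data="new"))
# ('read_data_to', 1, 'old')  -> read data transmitted to the first node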
Abstract:
A snoop filter maintains data coherency information for multiple caches in a multi-processor system. When a new request for a memory line arrives, an entry of the snoop filter is selected for replacement if there is no available slot in the snoop filter to accommodate the new request. The selected entry is among the entries predicted to be short-lived based on a coherency state. An invalidation message is sent to the cache with which the selected entry is associated.
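A short Python sketch of this replacement policy is given below; the SnoopFilterSet class and the choice of which coherency states count as short-lived are illustrative assumptions. When the filter is full, an entry in a predicted short-lived state is evicted and an invalidation is sent to the cache it tracked.

# Minimal sketch of snoop-filter replacement that prefers entries whose
# coherency state predicts a short remaining lifetime. Treating 'E' as
# short-lived is an assumption made for illustration.

class SnoopFilterSet:
    def __init__(self, capacity, short_lived_states=("E",)):
        self.capacity = capacity
        self.short_lived_states = short_lived_states
        self.entries = {}  # address -> {"state": str, "node": int}

    def allocate(self, address, state, node, send_invalidate):
        if len(self.entries) >= self.capacity:
            victim = self._pick_victim()
            evicted = self.entries.pop(victim)
            # Invalidate the cached copy the evicted entry tracked so the
            # filter's view of the caches stays conservative.
            send_invalidate(evicted["node"], victim)
        self.entries[address] = {"state": state, "node": node}

    def _pick_victim(self):
        for addr, e in self.entries.items():
            if e["state"] in self.short_lived_states:
                return addr              # prefer a predicted short-lived entry
        return next(iter(self.entries))  # fall back to any entry

def send_invalidate(node, address):
    print(f"invalidate line {hex(address)} in node {node}'s cache")

sf = SnoopFilterSet(capacity=2)
sf.allocate(0x100, "M", node=0, send_invalidate=send_invalidate)
sf.allocate(0x140, "E", node=1, send_invalidate=send_invalidate)
sf.allocate(0x180, "S", node=2, send_invalidate=send_invalidate)
# -> invalidate line 0x140 in node 1's cache (the 'E' entry was evicted)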