Abstract:
A method and apparatus for optimizing line writes in cache coherent systems. A new cache line may be allocated without loading data to fill the new cache line when a store buffer coalesces enough stores to fill the cache line. Data may be loaded to fill the line if an insufficient number of stores are coalesced to fill the entire cache line. The cache line may be allocated by initiating a read and invalidate request and asserting a back-off signal to cancel the read if there is an indication that the coalesced stores will fill the cache line.
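For illustration only, the following C sketch models the coalescing decision described above, assuming a 64-byte line and a per-byte valid mask in the store buffer; the names (sb_entry, commit_line) are made up for this sketch, and the read-and-invalidate request with its back-off signal is not modeled.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE_SIZE 64   /* assumed cache-line size in bytes */

struct sb_entry {
    uint64_t line_addr;          /* cache-line-aligned address      */
    uint8_t  data[LINE_SIZE];    /* coalesced store data            */
    uint8_t  valid[LINE_SIZE];   /* 1 if this byte has been written */
};

/* Merge a store of len bytes at offset off into the entry. */
static void sb_merge(struct sb_entry *e, unsigned off, const void *src, unsigned len)
{
    memcpy(&e->data[off], src, len);
    memset(&e->valid[off], 1, len);
}

/* True when coalesced stores cover every byte of the line. */
static int sb_full(const struct sb_entry *e)
{
    for (unsigned i = 0; i < LINE_SIZE; i++)
        if (!e->valid[i])
            return 0;
    return 1;
}

/* Commit the entry: allocate the line without a fill read when it is fully
 * covered; otherwise read the memory copy first and merge the written bytes. */
static void commit_line(struct sb_entry *e, const uint8_t *memory_line, uint8_t *cache_line)
{
    if (sb_full(e)) {
        memcpy(cache_line, e->data, LINE_SIZE);      /* no fill read needed */
    } else {
        memcpy(cache_line, memory_line, LINE_SIZE);  /* fill from memory    */
        for (unsigned i = 0; i < LINE_SIZE; i++)
            if (e->valid[i])
                cache_line[i] = e->data[i];
    }
}

int main(void)
{
    struct sb_entry e = { .line_addr = 0x1000 };
    uint8_t memory_line[LINE_SIZE] = { 0 }, cache_line[LINE_SIZE];
    uint8_t payload[LINE_SIZE];

    memset(payload, 0xAB, sizeof payload);
    sb_merge(&e, 0, payload, LINE_SIZE);             /* stores cover the full line */
    commit_line(&e, memory_line, cache_line);
    printf("line fully coalesced, fill read skipped: %d\n", sb_full(&e));
    return 0;
}
```

In hardware the same decision would be made before or while the bus request is outstanding; cancelling an already-issued read via the back-off signal is beyond this sketch.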
Abstract:
Systems and methods for early fixed latency subtractive decoding are disclosed. The subtractive decoding device speculatively acknowledges a bus transaction within a fixed time period that is the same as the time period for positive decoding. Pipelining of a new bus transaction may therefore be accomplished each new time period. A bus transaction may be retried if no acknowledgement occurs within the fixed time period.
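As a rough illustration, the fixed-latency decode decision might look like the following C model, assuming a single positive decode range and a hypothetical decode_transaction helper; the pipelining and retry signaling of the bus are reduced to return codes.

```c
#include <stdbool.h>
#include <stdio.h>

#define DECODE_WINDOW 2   /* clocks; same fixed window for positive and subtractive decode */

enum result { ACK_POSITIVE, ACK_SUBTRACTIVE, RETRY };

/* A positive decoder claims addresses inside its programmed range. */
static bool positive_claim(unsigned addr)
{
    return addr >= 0x1000 && addr < 0x2000;
}

/* One transaction: if no positive decoder claims the address within the fixed
 * window, the subtractive device speculatively acknowledges in that same
 * window, so a new transaction can be pipelined every DECODE_WINDOW clocks.
 * If the speculative acknowledgement cannot be honored, the transaction is
 * retried. */
static enum result decode_transaction(unsigned addr, bool subtractive_can_service)
{
    if (positive_claim(addr))
        return ACK_POSITIVE;
    return subtractive_can_service ? ACK_SUBTRACTIVE : RETRY;
}

int main(void)
{
    printf("decode window: %d clocks\n", DECODE_WINDOW);
    printf("0x1800 -> %d (positive ack)\n",    decode_transaction(0x1800, true));
    printf("0x9000 -> %d (subtractive ack)\n", decode_transaction(0x9000, true));
    printf("0x9000 -> %d (retry)\n",           decode_transaction(0x9000, false));
    return 0;
}
```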
Abstract:
A system and method for improved cache performance is disclosed. In one embodiment, a processor with a cache having a dirty cache line subject to eviction may send the dirty cache line to an available replacement block in another processor's cache. In one embodiment, an available replacement block may contain a cache line in an invalid state. In another embodiment, an available replacement block may contain a cache line in an invalid state or in a shared state. Multiple transfers of the dirty cache line to more than one processor's cache may be inhibited using a set of accept signals and backoff signals. These accept signals may be combined to inhibit multiple processors from accepting the dirty cache line, as well as to inhibit the system memory from accepting the dirty cache line.
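The eviction path can be sketched in C as follows, assuming MESI-like line states, a first-acceptor-wins model of the accept/backoff handshake, and illustrative names (find_replacement, evict_dirty) that do not come from the patent.

```c
#include <stdbool.h>
#include <stdio.h>

enum state { INVALID, SHARED, EXCLUSIVE, MODIFIED };

#define NUM_PEERS 3
#define WAYS      4

struct cache_set { enum state line[WAYS]; };

/* A replacement block is "available" if it holds an INVALID line (one
 * embodiment) or, optionally, a SHARED line (another embodiment). */
static int find_replacement(const struct cache_set *s, bool allow_shared)
{
    for (int w = 0; w < WAYS; w++)
        if (s->line[w] == INVALID || (allow_shared && s->line[w] == SHARED))
            return w;
    return -1;
}

/* The first peer that can accept the dirty line asserts "accept"; the combined
 * accept signal backs off the remaining peers and inhibits the memory
 * write-back.  Returns the accepting peer, or -1 if the line goes to memory. */
static int evict_dirty(struct cache_set peers[NUM_PEERS], bool allow_shared)
{
    for (int p = 0; p < NUM_PEERS; p++) {
        int way = find_replacement(&peers[p], allow_shared);
        if (way >= 0) {
            peers[p].line[way] = MODIFIED;   /* dirty line now lives here   */
            return p;                        /* other peers are backed off  */
        }
    }
    return -1;                               /* fall back to system memory  */
}

int main(void)
{
    struct cache_set peers[NUM_PEERS] = {
        { { MODIFIED, EXCLUSIVE, MODIFIED, MODIFIED } },
        { { SHARED,   MODIFIED,  MODIFIED, MODIFIED } },
        { { INVALID,  SHARED,    MODIFIED, MODIFIED } },
    };
    printf("dirty line accepted by peer %d\n", evict_dirty(peers, true));
    return 0;
}
```

The sequential scan stands in for the combined accept/backoff signals; in hardware the peers respond in parallel and the combining logic picks a single acceptor.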
Abstract:
Some embodiments of the invention include an address interconnect and a data interconnect to transfer data among a number of devices. The data interconnect is configured to transfer data among the devices via multiple transfer paths. A transfer of data on one transfer path is independent from a transfer of data on another transfer path. In some cases, data is concurrently transferred among more than two of the devices on at least one of the address interconnect and the data interconnect. Other embodiments are described and claimed.
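A simple C sketch of the independent transfer paths follows, assuming a crossbar-like data fabric in which a transfer occupies one source port and one destination port per cycle; the address interconnect and arbitration details are not modeled, and the function names are invented for the sketch.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_DEVICES 4

struct transfer { int src, dst; };

/* Schedule one cycle: grant every requested transfer whose source and
 * destination ports are still free this cycle; granted transfers proceed
 * concurrently and independently.  Returns the number granted. */
static int schedule_cycle(const struct transfer *req, int n, bool granted[])
{
    bool src_busy[NUM_DEVICES] = { false };
    bool dst_busy[NUM_DEVICES] = { false };
    int concurrent = 0;

    for (int i = 0; i < n; i++) {
        granted[i] = !src_busy[req[i].src] && !dst_busy[req[i].dst];
        if (granted[i]) {
            src_busy[req[i].src] = true;
            dst_busy[req[i].dst] = true;
            concurrent++;
        }
    }
    return concurrent;
}

int main(void)
{
    /* Device 0 -> 1 and device 2 -> 3 use disjoint paths and move in the same
     * cycle; device 0 -> 3 conflicts on source port 0 and must wait. */
    struct transfer req[] = { { 0, 1 }, { 2, 3 }, { 0, 3 } };
    bool granted[3];
    printf("%d transfers granted concurrently\n", schedule_cycle(req, 3, granted));
    return 0;
}
```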
Abstract:
A multiprocessor system may include multiple processors and multiple caches associated with the processors. The system may employ a memory snarfing technique to reduce writes to the system (or main) memory. Cache-ownership-capable agents, e.g., agents with write-back caches, may snarf the data (obtain the cache line) if the required cache line is in a valid state in the agent's cache.
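The snarfing decision can be illustrated with the following C sketch, assuming a flat list of bus agents and a hypothetical snoop_writeback helper; real snoop filtering and coherence-state transitions are omitted.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_AGENTS 3

struct agent {
    uint64_t line_addr;   /* line cached by this agent     */
    bool     valid;       /* line present in a valid state */
    uint32_t data;
};

/* Returns the index of the snarfing agent, or -1 if the write-back must go to
 * system memory.  The first cache-ownership-capable agent holding the line in
 * a valid state takes ownership of the data. */
static int snoop_writeback(struct agent agents[], uint64_t addr, uint32_t data)
{
    for (int i = 0; i < NUM_AGENTS; i++) {
        if (agents[i].valid && agents[i].line_addr == addr) {
            agents[i].data = data;   /* snarf: capture the line from the bus */
            return i;
        }
    }
    return -1;
}

int main(void)
{
    struct agent agents[NUM_AGENTS] = {
        { 0x2000, true,  0 },
        { 0x1000, false, 0 },
        { 0x1000, true,  0 },
    };
    int who = snoop_writeback(agents, 0x1000, 0xDEADBEEF);
    if (who >= 0)
        printf("agent %d snarfed the line; memory write suppressed\n", who);
    else
        printf("no valid copy; write-back goes to system memory\n");
    return 0;
}
```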
Abstract:
A multiprocessor computer system has a plurality of processors, each associated with a cache memory. Cache traffic is monitored by the respective processors to determine the load on each of the cache memories. Signals corresponding to the cache loads are generated and analyzed, and a target processor for a push data operation from a bus agent to that processor's cache memory is selected using the load information. The push operations to the caches are thereby optimized based on the cache traffic information.
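As a simple illustration, the target-selection step might look like the following C sketch, under the assumption that each processor exposes a single traffic counter and that the least loaded cache is preferred; the name select_push_target is invented for the sketch.

```c
#include <stdio.h>

#define NUM_PROCS 4

/* Pick the processor whose cache currently reports the lowest load. */
static int select_push_target(const unsigned load[NUM_PROCS])
{
    int target = 0;
    for (int p = 1; p < NUM_PROCS; p++)
        if (load[p] < load[target])
            target = p;
    return target;
}

int main(void)
{
    unsigned load[NUM_PROCS] = { 37, 12, 58, 24 };   /* monitored cache traffic */
    printf("push data to processor %d\n", select_push_target(load));
    return 0;
}
```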
Abstract:
Systems and methods for early fixed latency subtractive decoding are disclosed. The subtractive decoding device speculatively, or conditionally, acknowledges a bus transaction within a fixed time period that is the same as the time period for positive decoding. Pipelining of a new bus transaction may therefore be accomplished each new time period. A bus transaction may be retried if no acknowledgement occurs within the fixed time period.