Abstract:
A cache coherence protocol facilitates distributed cache coherency conflict resolution in a multi-node system, with conflicts resolved at a home node.
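The following is a minimal C++ sketch of resolving conflicts at a home node, assuming a simple request/release exchange between peer nodes and the home node; the HomeNode class, its member names, and the grant-by-queue policy are illustrative choices, not details taken from the protocol itself.

```cpp
#include <cstdint>
#include <map>
#include <queue>

// Hypothetical message a peer node sends to the home node for a cache line.
struct Request {
    int      peer_id;   // requesting node
    uint64_t line_addr; // cache-line address
};

// Minimal home-node model: requests that collide on the same line are queued,
// and the home node grants the line to one requester at a time, so the
// conflict is resolved at the home node rather than among the peers.
class HomeNode {
    std::map<uint64_t, std::queue<int>> waiters_; // line -> queued peer ids
    std::map<uint64_t, int>             owner_;   // line -> current owner
public:
    // Returns the peer granted the line now, or -1 if it must wait.
    int handle_request(const Request& r) {
        auto it = owner_.find(r.line_addr);
        if (it == owner_.end()) {              // no conflict: grant immediately
            owner_[r.line_addr] = r.peer_id;
            return r.peer_id;
        }
        waiters_[r.line_addr].push(r.peer_id); // conflict: serialize at home
        return -1;
    }

    // Called when the current owner completes; the next waiter (if any) wins.
    int handle_release(uint64_t line_addr) {
        owner_.erase(line_addr);
        auto& q = waiters_[line_addr];
        if (q.empty()) return -1;
        int next = q.front();
        q.pop();
        owner_[line_addr] = next;
        return next;
    }
};
```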
Abstract:
A method and device for determining an attribute associated with a locked load instruction and selecting a lock protocol based upon the attribute of the locked load instruction. Also disclosed is a method for concurrently executing the respective lock sequences associated with multiple threads of a processing device.
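As a rough illustration, the sketch below selects a lock protocol from a single attribute of the locked load. Treating the attribute as the target's memory type (cacheable versus uncacheable) and the protocols as a cache lock versus a bus lock is an assumption for the example, since the abstract leaves both abstract.

```cpp
// Hypothetical attribute of a locked load's target address; memory type is
// only an illustrative choice of attribute.
enum class MemAttr { WritebackCacheable, Uncacheable };

enum class LockProtocol { CacheLock, BusLock };

// Select a lock protocol from the locked load's attribute: a cacheable target
// can be locked within the local cache line, while an uncacheable target
// falls back to locking the shared bus.
LockProtocol select_lock_protocol(MemAttr attr) {
    return attr == MemAttr::WritebackCacheable ? LockProtocol::CacheLock
                                               : LockProtocol::BusLock;
}
```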
Abstract:
Techniques to control power and processing among a plurality of asymmetric cores. In one embodiment, one or more asymmetric cores are power managed to migrate processes or threads among a plurality of cores according to the performance and power needs of the system.
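A minimal sketch of the migration decision follows, assuming a two-class (high-performance / low-power) core mix and a simple demand threshold; the Core struct, pick_target_core, and the 50% threshold are all illustrative, not the claimed mechanism.

```cpp
#include <vector>

// Hypothetical model of an asymmetric core: a "big" core offers higher
// performance at higher power, a "little" core the reverse.
struct Core {
    int  id;
    bool is_big;      // true = high-performance core
    bool powered_on;
};

// Pick a target core for a process or thread given its current performance
// demand (0..100): demanding work migrates to a big core, light work to a
// little core so the big core can be powered down.
int pick_target_core(std::vector<Core>& cores, int demand) {
    bool want_big = demand > 50;          // illustrative threshold
    for (auto& c : cores) {
        if (c.is_big == want_big) {
            c.powered_on = true;          // power up the chosen core
            return c.id;
        }
    }
    return cores.empty() ? -1 : cores.front().id; // fallback: any core
}
```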
Abstract:
A method and apparatus for handling low power and high performance loads is herein described. Software, such as a compiler, is utilized to identify producer loads, consumer reuse loads, and consumer forwarded loads. Based on the identification by software, hardware is able to direct performance of the load directly to a load value buffer, a store buffer, or a data cache. As a result, accesses to cache are reduced, through direct loading from load and store buffers, without sacrificing load performance.
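The sketch below models the hardware steering decision, assuming the compiler's classification arrives as a per-load hint; LoadHint, perform_load, and the toy hash maps are hypothetical stand-ins for the load value buffer, store buffer, and data cache.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

// Hypothetical load classes a compiler might encode as a hint on each load.
enum class LoadHint { Producer, ConsumerReuse, ConsumerForwarded, Normal };

// Toy stand-ins for the three structures a hinted load can be steered to.
std::unordered_map<uint64_t, uint64_t> load_value_buffer, store_buffer, data_cache;

std::optional<uint64_t> lookup(const std::unordered_map<uint64_t, uint64_t>& t,
                               uint64_t addr) {
    auto it = t.find(addr);
    if (it == t.end()) return std::nullopt;
    return it->second;
}

// Steer the load based on the software-provided hint: consumer-reuse loads
// read the load value buffer, consumer-forwarded loads read the store buffer,
// and producer/normal loads (or buffer misses) fall back to the data cache.
uint64_t perform_load(uint64_t addr, LoadHint hint) {
    if (hint == LoadHint::ConsumerReuse) {
        if (auto v = lookup(load_value_buffer, addr)) return *v;
    } else if (hint == LoadHint::ConsumerForwarded) {
        if (auto v = lookup(store_buffer, addr)) return *v;
    }
    return data_cache[addr]; // data cache access
}
```

Directing reuse and forwarded loads to the buffers is what avoids the cache access in the common case, which is the reduction the abstract describes.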
Abstract:
A high speed DRAM cache architecture. One disclosed embodiment includes a multiplexed bus interface to interface with a multiplexed bus. A cache control circuit drives a row address portion of an address on the multiplexed bus interface and a command to open a memory page containing data for a plurality of ways. The cache control circuit subsequently drives a column address including at least a way indicator to the multiplexed bus interface.
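A rough C++ sketch of the two bus phases, assuming the way indicator is packed into the upper column-address bits; the field widths, the 10-bit column split, and the drive_* helpers are illustrative only.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical address split for a set-associative DRAM cache behind a
// multiplexed address bus; field widths are illustrative.
struct CacheAccess {
    uint32_t row;    // selects the DRAM page holding all ways of the set
    uint32_t column; // selects data within the open page
    uint32_t way;    // which way within the set
};

// Phase 1: drive the row address and a command to open the page that
// contains data for all ways of the set.
void drive_row_phase(uint32_t row) {
    std::printf("ACT  row=0x%x\n", row);
}

// Phase 2: drive a column address that includes the way indicator, selecting
// the desired way's data out of the open page.
void drive_column_phase(uint32_t column_with_way) {
    std::printf("READ col=0x%x\n", column_with_way);
}

void dram_cache_read(const CacheAccess& a) {
    drive_row_phase(a.row);
    // Pack the way indicator into the high column bits (illustrative
    // encoding; the abstract only requires that the column address include
    // at least a way indicator).
    uint32_t column_with_way = (a.way << 10) | (a.column & 0x3FF);
    drive_column_phase(column_with_way);
}
```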
Abstract:
Systems and methods of managing transactions provide for receiving a first flush command at a first I/O hub, wherein the first flush command is dedicated to non-posted transactions. One embodiment further provides for halting an inbound ordering queue of the first I/O hub with regard to non-posted transactions in response to the first flush command, and flushing a non-posted transaction from an outgoing buffer of the first I/O hub to a second I/O hub while the inbound ordering queue is halted with regard to non-posted transactions.
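As a sketch of the described flow, the code below halts admission of non-posted transactions, drains non-posted entries from the outgoing buffer to the second hub, and then resumes; the IoHub struct, send_to_hub, and the resume step are assumptions layered on top of the abstract.

```cpp
#include <deque>
#include <vector>

// Toy transaction model: posted writes vs. non-posted (e.g., read) requests.
struct Transaction { bool posted; int payload; };

struct IoHub;
void send_to_hub(IoHub& dst, const Transaction& t);

struct IoHub {
    std::deque<Transaction>  inbound_ordering_queue;
    std::vector<Transaction> outgoing_buffer;
    bool non_posted_halted = false;

    // Handle a flush command dedicated to non-posted transactions: halt the
    // inbound ordering queue with regard to non-posted transactions, then
    // flush buffered non-posted transactions to the second hub while halted.
    void handle_non_posted_flush(IoHub& second_hub) {
        non_posted_halted = true;  // no new non-posted work is admitted
        for (auto it = outgoing_buffer.begin(); it != outgoing_buffer.end();) {
            if (!it->posted) {
                send_to_hub(second_hub, *it);
                it = outgoing_buffer.erase(it);
            } else {
                ++it;              // posted transactions are unaffected
            }
        }
        non_posted_halted = false; // resume (assumed) once the flush completes
    }
};

void send_to_hub(IoHub& dst, const Transaction& t) {
    dst.inbound_ordering_queue.push_back(t);
}
```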