Abstract:
The apparatus and method described herein handle shared memory accesses between multiple processors using lock-free synchronization through transactional execution. A transaction demarcated in software is speculatively executed. During execution, invalidating remote accesses/requests to addresses loaded from, and to be written to, shared memory are tracked by a transaction buffer. If an invalidating access is encountered, the transaction is re-executed. After a pre-determined number of re-executions, the transaction may be re-executed non-speculatively with locks/semaphores.
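As a minimal behavioral sketch (not the patent's implementation), the retry-then-fallback policy might be modeled in Python as follows; the names MAX_RETRIES, execute_transaction, and conflict_detected are illustrative assumptions:

    import threading

    MAX_RETRIES = 3              # the "pre-determined number" of speculative tries
    _fallback_lock = threading.Lock()

    def execute_transaction(txn, conflict_detected):
        """Run `txn` speculatively; retry on conflict, then fall back to a lock.

        `conflict_detected` stands in for the transaction buffer's check that a
        remote access invalidated an address in the read/write set.
        """
        for _ in range(MAX_RETRIES):
            result = txn()                 # speculative attempt
            if not conflict_detected():    # no invalidating remote access seen
                return result              # commit the transaction
            # otherwise discard speculative state and re-execute
        with _fallback_lock:               # non-speculative re-execution under a
            return txn()                   # lock/semaphore after repeated aborts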
Abstract:
A method and apparatus for selecting and updating a replacement candidate in a cache is disclosed. In one embodiment, a cache miss may initiate the eviction of the current replacement candidate in a last-level cache. The cache miss may also initiate the selection of a future replacement candidate. Upon selection of the future replacement candidate, the corresponding cache line may be invalidated in lower-level caches but remain resident in the last-level cache. Hits to the replacement candidate in the last-level cache prior to the next cache miss may update the future replacement candidate.
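A hypothetical Python model of this candidate-ahead policy is sketched below; the class and method names (LastLevelCacheSet, access) and the random candidate selection are assumptions, not drawn from the patent:

    import random

    class LastLevelCacheSet:
        """Toy model of one LLC set that keeps a replacement candidate ready."""

        def __init__(self, tags):
            self.lines = set(tags)          # tags currently resident in this set
            self.candidate = None
            self._pick_candidate()

        def _pick_candidate(self, exclude=None):
            # In hardware, the chosen candidate would also be invalidated in
            # lower-level caches here while staying resident in the LLC.
            choices = sorted(self.lines - {exclude})
            self.candidate = random.choice(choices) if choices else None

        def access(self, tag):
            if tag in self.lines:                     # hit
                if tag == self.candidate:             # hit on the candidate:
                    self._pick_candidate(exclude=tag) # update to a new candidate
                return True
            self.lines.discard(self.candidate)        # miss: evict the candidate
            self.lines.add(tag)                       # install the new line
            self._pick_candidate()                    # pre-select the next victim
            return False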
Abstract:
A method and apparatus for using result-speculative data under run-ahead speculative execution is disclosed. In one embodiment, the uncommitted target data from instructions being run-ahead executed may be saved into an advance data table. This advance data table may be indexed by the lines in the instruction buffer containing the instructions for run-ahead execution. When the instructions are re-executed subsequent to the run-ahead execution, valid target data may be retrieved from the advance data table and supplied as part of a zero-clock bypass, enabling parallel re-execution of dependent instructions. In other embodiments, the advance data table may be content-addressable-memory searchable on target registers and may supply target data to general speculative execution.
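Purely as an illustrative sketch, the table could be modeled like this; the names AdvanceDataTable, record, bypass, and search_by_register are invented for the example:

    class AdvanceDataTable:
        """Toy model of the table that holds uncommitted run-ahead results."""

        def __init__(self):
            # instruction-buffer line index -> (target_register, value, valid)
            self.entries = {}

        def record(self, buf_line, target_reg, value):
            """Save uncommitted target data produced during run-ahead execution."""
            self.entries[buf_line] = (target_reg, value, True)

        def bypass(self, buf_line):
            """On re-execution, supply saved data as a zero-clock bypass, or None."""
            entry = self.entries.get(buf_line)
            return (entry[0], entry[1]) if entry and entry[2] else None

        def search_by_register(self, target_reg):
            """CAM-style lookup on target registers (the alternate embodiment)."""
            return [value for reg, value, valid in self.entries.values()
                    if valid and reg == target_reg]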
Abstract:
A system may include a first portion and a processor. The first portion may include a first thermal sensor to provide first thermal data. The processor may include a core, a core thermal sensor to provide core thermal data, and an electrical power sensor to provide electrical data. The processor may also include a power management unit to selectively throttle the core based on the first thermal data, the core thermal data, and the electrical data.
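A hedged sketch of the throttling decision follows; the threshold values and the any-limit voting rule are assumptions for illustration only:

    def should_throttle(first_thermal_c, core_thermal_c, power_watts,
                        skin_limit_c=45.0, core_limit_c=100.0, power_limit_w=15.0):
        """Throttle if any of the platform, core, or electrical limits is hit."""
        return (first_thermal_c >= skin_limit_c
                or core_thermal_c >= core_limit_c
                or power_watts >= power_limit_w)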
Abstract:
The present invention provides a mechanism for supporting high-bandwidth instruction fetching in a multi-threaded processor. A multi-threaded processor includes an instruction cache (I-cache) and a temporary instruction cache (TIC). In response to an instruction pointer (IP) of a first thread hitting in the I-cache, a first block of instructions for the thread is provided to an instruction buffer and a second block of instructions for the thread is provided to the TIC. On a subsequent clock interval, the second block of instructions is provided to the instruction buffer, and first and second blocks of instructions from a second thread are loaded into a second instruction buffer and the TIC, respectively.
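The two-cycle handoff might look like the following illustrative Python, where icache is any mapping from fetch addresses to instruction blocks; the function name, buffers, and address arithmetic are assumptions:

    from collections import deque

    def fetch_two_threads(icache, ip_a, ip_b, buf_a, buf_b):
        """One illustrative two-cycle fetch sequence for threads A and B."""
        tic = deque()
        # Cycle 1: thread A's IP hits the I-cache; the first block goes to A's
        # instruction buffer and the second block is parked in the TIC.
        buf_a.append(icache[ip_a])
        tic.append(icache[ip_a + 1])
        # Cycle 2: the TIC drains A's second block while thread B uses the
        # I-cache port, loading its own first block and refilling the TIC.
        buf_a.append(tic.popleft())
        buf_b.append(icache[ip_b])
        tic.append(icache[ip_b + 1])
        return buf_a, buf_b

For example, fetch_two_threads({0: "A0", 1: "A1", 8: "B0", 9: "B1"}, 0, 8, [], []) delivers both of thread A's blocks while thread B's fetch proceeds in parallel.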
Abstract:
A method and apparatus for improving dispersal performance of instruction threads is described. In one embodiment, the dispersal logic determines whether the instructions supplied to it include any NOP instructions. When a NOP instruction is detected, the dispersal logic places the NOP into a no-op port for execution. All other instructions are distributed to the proper execution pipes in a normal manner. Because the NOP instructions do not use the execution resources of other instructions, all instruction threads can be executed in one cycle.
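A minimal sketch of NOP-aware dispersal, with an invented port naming scheme and a string encoding of instructions, could read:

    def disperse(instructions, num_exec_ports):
        """Assign each instruction a port; NOPs are steered to a no-op port."""
        NOP_PORT = "nop-port"
        assignments = []
        next_port = 0
        for insn in instructions:
            if insn == "nop":
                # NOPs consume no real execution pipe, so an issue group
                # containing NOPs can still disperse in a single cycle.
                assignments.append((insn, NOP_PORT))
            else:
                assignments.append((insn, next_port % num_exec_ports))
                next_port += 1
        return assignments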
Abstract:
Systems and techniques for virtual device sharing. A failure of one of a plurality of memory devices corresponding to a first rank in a memory system is detected. The memory system has a plurality of ranks, each rank having a plurality of memory devices used to store a cache line. A portion of the cache line corresponding to the failed memory device is stored in a memory device in a second rank in the memory system, and the remaining portion of the cache line is stored in the first rank of the memory system.
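Illustratively (with an invented per-device data layout), the spill-over remapping might be sketched as:

    def remap_cache_line(line_slices, failed_device, spare_rank):
        """Split a cache line's per-device slices across two ranks after a failure.

        line_slices: one slice (e.g. bytes) per memory device in the first rank.
        Returns the surviving slices, kept in the first rank, plus the spilled
        slice redirected to a device in `spare_rank`.
        """
        surviving = [s for i, s in enumerate(line_slices) if i != failed_device]
        spilled = {"rank": spare_rank, "data": line_slices[failed_device]}
        return surviving, spilled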
Abstract:
Methods and apparatus relating to Opportunistic Snoop Broadcast (OSB) in directory-enabled home snoopy systems are described. In one embodiment, a plurality of snoops are broadcast to a plurality of caching agents in response to a request for data and based on a comparison of a link's bandwidth consumption with a threshold value. Other embodiments are also disclosed.
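A hedged sketch of the OSB decision, with an assumed utilization threshold and agent interface, might be:

    class CachingAgent:
        """Illustrative caching agent that answers snoops."""
        def __init__(self, name):
            self.name = name
        def snoop(self, address):
            # A real agent would check its cache state; here we just acknowledge.
            return (self.name, address, "miss")

    def handle_request(address, caching_agents, link_utilization, threshold=0.7):
        """Broadcast snoops opportunistically based on link bandwidth headroom."""
        if link_utilization < threshold:
            # Opportunistic Snoop Broadcast: snoop every caching agent at once.
            return [agent.snoop(address) for agent in caching_agents]
        # Otherwise fall back to the directory-directed path (not modeled here).
        return None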