Abstract:
One embodiment of the present invention provides a system that facilitates avoiding locks by speculatively executing critical sections of code. During operation, the system allows a process to speculatively execute a critical section of code within a program without first acquiring a lock associated with the critical section. If the process subsequently completes the critical section without encountering an interfering data access from another process, the system commits changes made during the speculative execution, and resumes normal non-speculative execution of the program past the critical section. Otherwise, if an interfering data access from another process is encountered during execution of the critical section, the system discards changes made during the speculative execution, and attempts to re-execute the critical section.
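A minimal C sketch of the retry loop this abstract describes, assuming hypothetical primitives (speculative_begin, conflict_detected, speculative_commit, speculative_abort) that stand in for the hardware support; they are stubbed here purely for illustration:

    /* Lock-elision sketch: execute the critical section speculatively and
     * retry on interference. The primitives below are hypothetical stubs. */
    #include <stdbool.h>
    #include <stdio.h>

    static void speculative_begin(void)  { }               /* start tracking reads/writes  */
    static bool conflict_detected(void)  { return false; } /* interfering access observed? */
    static void speculative_commit(void) { }               /* make buffered stores visible */
    static void speculative_abort(void)  { }               /* discard buffered stores      */

    static int shared_counter;

    static void locked_increment(void)
    {
        for (;;) {
            speculative_begin();          /* enter the critical section without the lock */
            shared_counter++;             /* body of the critical section                 */
            if (!conflict_detected()) {
                speculative_commit();     /* no interference: commit and continue past it */
                return;
            }
            speculative_abort();          /* interference: discard changes and re-execute */
        }
    }

    int main(void)
    {
        locked_increment();
        printf("counter = %d\n", shared_counter);
        return 0;
    }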
Abstract:
A processor includes a device providing a throttling power output signal. The throttling power output signal is used to determine when to logically throttle the power consumed by the processor. At least one core in the processor includes a pipeline having a decode pipe, and a logical power throttling unit coupled to the device to receive the output signal and coupled to the decode pipe. When the logical power throttling unit receives a throttling power output signal satisfying a predetermined criterion, it causes the decode pipe to reduce the average number of instructions decoded per processor cycle without physically changing the processor cycle or any processor supply voltages.
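An illustrative C sketch of the logical throttling decision; the decode widths, the threshold, and the power-signal sampling function are hypothetical values chosen for the example, not taken from the abstract:

    #include <stdio.h>

    #define FULL_DECODE_WIDTH       4   /* instructions decoded per cycle, unthrottled  */
    #define THROTTLED_DECODE_WIDTH  1   /* reduced decode rate under logical throttling */
    #define POWER_THRESHOLD        90   /* hypothetical "predetermined criterion"       */

    /* Sampled power reading for the current cycle (stubbed for the example). */
    static int sample_power_signal(int cycle) { return 80 + (cycle % 20); }

    int main(void)
    {
        int decoded_total = 0;
        for (int cycle = 0; cycle < 100; cycle++) {
            int width = FULL_DECODE_WIDTH;
            /* Logical throttling: reduce decode bandwidth without touching the
             * clock or any supply voltage. */
            if (sample_power_signal(cycle) >= POWER_THRESHOLD)
                width = THROTTLED_DECODE_WIDTH;
            decoded_total += width;
        }
        printf("average decode width: %.2f\n", decoded_total / 100.0);
        return 0;
    }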
Abstract:
One embodiment of the present invention provides a system that selectively monitors load instructions to support transactional execution of a process, wherein changes made during the transactional execution are not committed to the architectural state of a processor until the transactional execution successfully completes. Upon encountering a load instruction during transactional execution of a block of instructions, the system determines whether the load instruction is a monitored load instruction or an unmonitored load instruction. If the load instruction is a monitored load instruction, the system performs the load operation, and load-marks a cache line associated with the load instruction to facilitate subsequent detection of an interfering data access to the cache line from another process. If the load instruction is an unmonitored load instruction, the system performs the load operation without load-marking the cache line.
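A C sketch of the monitored-versus-unmonitored load distinction; the cache_line structure, its load_marked bit, and the is_monitored flag are hypothetical simplifications of the processor state the abstract describes:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct cache_line {
        uint64_t data;
        bool     load_marked;   /* set so interfering accesses can be detected later */
    };

    /* Perform a load during transactional execution; mark the line only if the
     * instruction is a monitored load. */
    static uint64_t transactional_load(struct cache_line *line, bool is_monitored)
    {
        if (is_monitored)
            line->load_marked = true;   /* track this line for conflict detection */
        return line->data;              /* unmonitored loads skip the marking     */
    }

    int main(void)
    {
        struct cache_line line = { .data = 42, .load_marked = false };
        (void)transactional_load(&line, true);
        printf("load_marked = %d\n", line.load_marked);
        return 0;
    }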
Abstract:
Architectural techniques and implementations that defer enforcement of certain delayed control transfer instruction (DCTI) sequencing constraints or conventions to later stages of an execution pipeline are described. In this way, a processor pipeline front-end (including fetch sequencing) can be simplified, at least in part, by fetching instructions generally without regard to such constraints or conventions. Instead, enforcement of such sequencing constraints and/or conventions may be deferred to one or more pipeline stages associated with commitment or retirement of instructions. Higher fetch bandwidth may be achieved in some realizations when, for example, DCTI couples are encountered in an execution sequence.
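A conceptual C sketch of deferring a DCTI-couple check from fetch to commit; the instruction encoding and the commit-stage test are simplified, hypothetical stand-ins for the pipeline described:

    #include <stdbool.h>
    #include <stdio.h>

    struct insn {
        bool is_dcti;        /* delayed control transfer instruction?           */
        bool in_delay_slot;  /* does this instruction sit in a DCTI delay slot? */
    };

    /* Front end: fetch freely, ignoring DCTI-couple sequencing constraints. */
    static struct insn fetch(int pc, const struct insn *imem) { return imem[pc]; }

    /* Back end: enforce the DCTI-couple convention only at commit/retirement. */
    static bool commit_allows(const struct insn *i)
    {
        /* A DCTI sitting in another DCTI's delay slot (a "DCTI couple") is
         * handled here rather than complicating fetch sequencing. */
        return !(i->is_dcti && i->in_delay_slot);
    }

    int main(void)
    {
        struct insn imem[2] = { { true, false }, { true, true } };
        for (int pc = 0; pc < 2; pc++) {
            struct insn i = fetch(pc, imem);
            printf("pc %d: %s\n", pc,
                   commit_allows(&i) ? "commit" : "handle at retirement");
        }
        return 0;
    }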
Abstract:
One embodiment of the present invention supports execution of a start transactional execution (STE) instruction, which marks the beginning of a block of instructions to be executed transactionally. Upon encountering the STE instruction during execution of a program, the system commences transactional execution of the block of instructions following the STE instruction. Changes made during this transactional execution are not committed to the architectural state of the processor until the transactional execution successfully completes.
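A small C sketch of the STE semantics, modeled as an interpreter; the opcode values and the checkpoint representation are hypothetical, and only the buffer-until-commit behavior follows the abstract:

    #include <stdbool.h>
    #include <stdio.h>

    enum opcode { OP_STE, OP_ADD, OP_COMMIT };

    static int  arch_reg;        /* committed architectural state             */
    static int  spec_reg;        /* speculative copy used while transactional */
    static bool transactional;

    static void execute(enum opcode op)
    {
        switch (op) {
        case OP_STE:                       /* start transactional execution       */
            spec_reg = arch_reg;           /* checkpoint architectural state      */
            transactional = true;
            break;
        case OP_ADD:
            if (transactional) spec_reg++; /* changes buffered, not yet committed */
            else               arch_reg++;
            break;
        case OP_COMMIT:                    /* transaction completed successfully  */
            arch_reg = spec_reg;           /* only now update architectural state */
            transactional = false;
            break;
        }
    }

    int main(void)
    {
        enum opcode program[] = { OP_STE, OP_ADD, OP_ADD, OP_COMMIT };
        for (unsigned i = 0; i < sizeof program / sizeof program[0]; i++)
            execute(program[i]);
        printf("architectural register = %d\n", arch_reg);
        return 0;
    }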
Abstract:
One embodiment of the present invention provides a system for releasing a memory location from transactional program execution. The system operates by executing a sequence of instructions during transactional program execution, wherein memory locations involved in the transactional program execution are monitored to detect interfering accesses from other threads, and wherein changes made during transactional execution are not committed until transactional execution completes without encountering an interfering data access from another thread. Upon encountering a release instruction for a memory location during the transactional program execution, the system modifies state information within the processor to release the memory location from monitoring. The system also executes a commit-and-start-new-transaction instruction, wherein the commit-and-start-new-transaction instruction atomically commits the transaction's stores, thereby removing them from the transaction's write set while the transaction's read set remains unaffected.
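A C sketch of the two operations; the read-set and write-set bitmaps and the line indices are hypothetical stand-ins, while the semantics (release drops monitoring for one location; commit-and-start-new-transaction empties the write set and leaves the read set untouched) follow the abstract:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t read_set;    /* bit i set: line i is load-monitored    */
    static uint32_t write_set;   /* bit i set: line i has a buffered store */

    /* Release instruction: stop monitoring one location for interfering accesses. */
    static void release_line(int i)
    {
        read_set &= ~(1u << i);
    }

    /* Commit-and-start-new-transaction: atomically commit the transaction's
     * stores (emptying its write set) while leaving the read set untouched. */
    static void commit_and_start_new_transaction(void)
    {
        /* In hardware the buffered stores become globally visible here.  */
        write_set = 0;           /* stores committed, write set now empty */
        /* read_set deliberately unchanged: monitoring carries into the new transaction. */
    }

    int main(void)
    {
        read_set  = 0x7;  /* lines 0..2 monitored      */
        write_set = 0x5;  /* lines 0 and 2 have stores */
        release_line(1);
        commit_and_start_new_transaction();
        printf("read_set=0x%x write_set=0x%x\n", (unsigned)read_set, (unsigned)write_set);
        return 0;
    }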
Abstract:
One embodiment of the present invention provides a system that corrects bit errors in temporary results within a central processing unit (CPU). During operation, the system receives a temporary result during execution of an in-flight instruction. Next, the system generates a parity bit for the temporary result, and stores the temporary result and the parity bit in a temporary register within the CPU. Before the temporary result is committed to the architectural state of the CPU, the system checks the temporary result and the parity bit to detect a bit error. If a bit error is detected, the system performs a micro-trap operation to re-execute the instruction that generated the temporary result, thereby regenerating the temporary result. Otherwise, if a bit error is not detected, the system commits the temporary result to the architectural state of the CPU.
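A C sketch of the parity check performed before commit; the re-execution is simulated by simply recomputing the result, and the injected bit flip is only there to exercise the error path:

    #include <stdint.h>
    #include <stdio.h>

    /* Parity over a 64-bit temporary result. */
    static unsigned parity64(uint64_t v)
    {
        v ^= v >> 32; v ^= v >> 16; v ^= v >> 8;
        v ^= v >> 4;  v ^= v >> 2;  v ^= v >> 1;
        return (unsigned)(v & 1);
    }

    struct temp_reg {
        uint64_t value;
        unsigned parity;
    };

    /* Produce a temporary result and its parity bit (stubbed "execution"). */
    static struct temp_reg execute_instruction(void)
    {
        uint64_t result = 0x1234u;
        return (struct temp_reg){ result, parity64(result) };
    }

    int main(void)
    {
        struct temp_reg t = execute_instruction();
        t.value ^= 1;                       /* inject a single-bit error for the demo */

        /* Check before committing to architectural state. */
        if (parity64(t.value) != t.parity) {
            /* Micro-trap: re-execute the instruction to regenerate the result. */
            t = execute_instruction();
        }
        printf("committed value = 0x%llx\n", (unsigned long long)t.value);
        return 0;
    }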
Abstract:
One embodiment of the present invention provides a system that facilitates selectively unmarking load-marked cache lines during transactional program execution, wherein load-marked cache lines are monitored during transactional execution to detect interfering accesses from other threads. During operation, the system encounters a release instruction during transactional execution of a block of instructions. In response to the release instruction, the system modifies the state of cache lines, which are specially load-marked to indicate they can be released from monitoring, to account for the release instruction being encountered. In doing so, the system can potentially cause the specially load-marked cache lines to become unmarked. In a variation on this embodiment, upon encountering a commit-and-start-new-transaction instruction, the system modifies load-marked cache lines to account for the commit-and-start-new-transaction instruction being encountered. In doing so, the system causes normally load-marked cache lines to become unmarked, while other specially load-marked cache lines may remain load-marked past the commit-and-start-new-transaction instruction.
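A C sketch of the marking behavior; the three-state enum is a hypothetical simplification of the per-cache-line state the abstract describes:

    #include <stdio.h>

    enum mark { UNMARKED, NORMAL_MARK, SPECIAL_MARK /* releasable */ };

    /* Release instruction: a specially load-marked line may become unmarked. */
    static enum mark on_release(enum mark m)
    {
        return (m == SPECIAL_MARK) ? UNMARKED : m;
    }

    /* Commit-and-start-new-transaction: normal marks are cleared, while
     * specially load-marked lines may stay marked into the next transaction. */
    static enum mark on_commit_and_start_new(enum mark m)
    {
        return (m == NORMAL_MARK) ? UNMARKED : m;
    }

    int main(void)
    {
        enum mark a = NORMAL_MARK, b = SPECIAL_MARK;
        b = on_release(b);
        a = on_commit_and_start_new(a);
        printf("a=%d b=%d\n", a, b);
        return 0;
    }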
Abstract:
One embodiment of the present invention provides a system that facilitates delaying interfering memory accesses from other threads during transactional execution. During transactional execution of a block of instructions, the system receives a request from another thread (or processor) to perform a memory access involving a cache line. If performing the memory access on the cache line will interfere with the transactional execution and if it is possible to delay the memory access, the system delays the memory access and stores copy-back information for the cache line to enable the cache line to be copied back to the requesting thread. At a later time, when the memory access will no longer interfere with the transactional execution, the system performs the memory access and copies the cache line back to the requesting thread.
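A C sketch of delaying an interfering request; the one-entry pending queue and the copy-back record are hypothetical stand-ins for the hardware structures the abstract describes:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct copy_back {
        int      requester;   /* which thread/processor asked for the line */
        uint64_t line_addr;   /* which cache line to copy back later       */
        bool     pending;
    };

    static struct copy_back delayed;   /* one-entry queue, for illustration */

    /* Handle a remote request received during transactional execution. */
    static void on_remote_request(int requester, uint64_t addr, bool interferes)
    {
        if (interferes) {
            /* Delay the access and remember how to copy the line back later. */
            delayed = (struct copy_back){ requester, addr, true };
        } else {
            printf("serviced request for 0x%llx immediately\n", (unsigned long long)addr);
        }
    }

    /* Once the access can no longer interfere, perform it and copy the line back. */
    static void drain_delayed_request(void)
    {
        if (delayed.pending) {
            printf("copying line 0x%llx back to requester %d\n",
                   (unsigned long long)delayed.line_addr, delayed.requester);
            delayed.pending = false;
        }
    }

    int main(void)
    {
        on_remote_request(1, 0x80, true);
        /* ... transactional execution completes ... */
        drain_delayed_request();
        return 0;
    }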