Abstract:
A processor includes a device providing a throttling power output signal, which is used to determine when to logically throttle the power consumed by the processor. At least one core in the processor includes a pipeline having a decode pipe, and a logical power throttling unit coupled to the decode pipe and to the device to receive the output signal. When the logical power throttling unit receives a throttling power output signal that satisfies a predetermined criterion, it causes the decode pipe to reduce the average number of instructions decoded per processor cycle without physically changing the processor cycle or any processor supply voltages.
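As a rough behavioral illustration only (the class names, decode width, and threshold below are invented for this sketch, not taken from the patent), the following Python model shows the idea of logical throttling: when the power signal satisfies the criterion, the decode pipe's per-cycle decode limit is lowered, while the clock period and supply voltages are left alone.

    # Hypothetical behavioral sketch of logical power throttling (class names,
    # decode width, and threshold are illustrative, not from the patent).

    class DecodePipe:
        def __init__(self, max_decode_width=4):
            self.max_decode_width = max_decode_width   # instructions decodable per cycle
            self.decode_limit = max_decode_width       # current logical limit

        def decode(self, instruction_queue):
            # Decode at most decode_limit instructions this cycle.
            issued = instruction_queue[:self.decode_limit]
            del instruction_queue[:len(issued)]
            return issued

    class LogicalPowerThrottlingUnit:
        def __init__(self, decode_pipe, threshold):
            self.decode_pipe = decode_pipe
            self.threshold = threshold                 # the predetermined criterion

        def observe(self, throttling_power_signal):
            # Lower the decode limit while the signal satisfies the criterion;
            # the clock period and supply voltages are never touched.
            if throttling_power_signal >= self.threshold:
                self.decode_pipe.decode_limit = max(1, self.decode_pipe.max_decode_width // 2)
            else:
                self.decode_pipe.decode_limit = self.decode_pipe.max_decode_width

    pipe = DecodePipe()
    lptu = LogicalPowerThrottlingUnit(pipe, threshold=0.9)
    queue = [f"insn{i}" for i in range(16)]
    for cycle, power in enumerate([0.5, 0.95, 0.95, 0.6]):
        lptu.observe(power)
        print(cycle, pipe.decode(queue))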
Abstract:
A register file, in a processor, includes a first plurality of registers of a first size, n-bits. A decoder uses a mapping that divides the register file into a second plurality M of registers having a second size. Each of the registers having the second size is assigned a different name in a continuous name space. Each register of the second size includes a plurality N of registers of the first size, n-bits. Each register in the plurality N of registers is assigned the same name as the register of the second size that includes that plurality. State information is maintained in the register file for each n-bit register. The dependence of an instruction on other instructions is detected through the continuous name space. The state information allows the processor to determine when the information in any portion, or all, of a register is valid.
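To make the naming scheme concrete, here is a small Python sketch under assumed parameters (n = 32 bits, N = 2 halves per wide register, M = 16 wide names); the per-n-bit-register valid bits stand in for the state information the abstract describes.

    # Illustrative sketch of the mapping (parameter values are assumptions):
    # M wide registers in a continuous name space, each made of N underlying
    # n-bit registers that share the wide register's name, with per-n-bit
    # state (valid) bits kept in the register file.

    N_BITS = 32          # first size: n bits
    N = 2                # n-bit registers per wide (second-size) register
    M = 16               # number of wide registers, names 0..M-1

    class RegisterFile:
        def __init__(self):
            self.valid = [[False] * N for _ in range(M)]   # state info per n-bit register
            self.data = [[0] * N for _ in range(M)]

        def write_part(self, name, part, value):
            # Writing one n-bit portion of wide register `name` marks it valid.
            self.data[name][part] = value & ((1 << N_BITS) - 1)
            self.valid[name][part] = True

        def is_ready(self, name, parts=range(N)):
            # A consumer of register `name` (or any portion of it) checks the
            # state bits to learn whether the data it needs is valid yet.
            return all(self.valid[name][p] for p in parts)

    rf = RegisterFile()
    rf.write_part(3, 0, 0xDEADBEEF)
    print(rf.is_ready(3, parts=[0]))   # True: the low half is valid
    print(rf.is_ready(3))              # False: the high half has not been written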
Abstract:
Embodiments of the present invention provide a system for executing program code on a processor. In these embodiments, the processor uses a primary strand to execute the program code. Upon detecting a predetermined condition, the processor instantaneously checkpoints the architectural state of the primary strand and then uses a subordinate strand to copy the checkpointed state to memory while the primary strand continues executing the program code without interruption.
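As a loose software analogy only (thread and variable names are invented), the sketch below snapshots a register dictionary when a condition fires and hands the snapshot to a background thread, which plays the role of the subordinate strand copying the checkpoint to memory while the primary strand keeps running.

    # Loose software analogy (all names invented): a snapshot of the register
    # state is handed to a background thread, which plays the subordinate
    # strand copying the checkpoint to memory while the primary strand runs on.

    import threading, copy

    architectural_state = {"r%d" % i: 0 for i in range(4)}
    checkpoint_store = []          # stands in for the region of memory written to
    copiers = []

    def subordinate_copy(snapshot):
        # Subordinate strand: drain the checkpointed state out to memory.
        checkpoint_store.append(dict(snapshot))

    def primary_strand(program):
        for reg, value, trigger in program:
            architectural_state[reg] = value
            if trigger:            # the predetermined condition is detected
                snapshot = copy.deepcopy(architectural_state)   # "instantaneous" checkpoint
                t = threading.Thread(target=subordinate_copy, args=(snapshot,))
                copiers.append(t)
                t.start()
            # the primary strand keeps executing; it never waits on the copy

    primary_strand([("r1", 10, False), ("r2", 20, True), ("r3", 30, False)])
    for t in copiers:
        t.join()
    print(checkpoint_store)        # one snapshot, taken when the condition fired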
Abstract:
One embodiment of the present invention provides a system that prevents data hazards during simultaneous speculative threading. The system starts by executing instructions in an execute-ahead mode using a first thread. While executing instructions in the execute-ahead mode, the system maintains dependency information for each register indicating whether the register is subject to an unresolved data dependency. Upon the resolution of a data dependency during execute-ahead mode, the system copies the dependency information to a speculative copy of the dependency information. The system then commences execution of the deferred instructions in a deferred mode using a second thread. While executing instructions in the deferred mode, if the speculative copy of the dependency information for a destination register indicates that a write-after-write (WAW) hazard exists with a subsequent non-deferred instruction executed by the first thread in execute-ahead mode, the system uses the second thread to execute the deferred instruction to produce a result and forwards that result to subsequent deferred instructions without committing it to the architectural state of the destination register. Hence, the system makes the result available to the subsequent deferred instructions without overwriting the result produced by the following non-deferred instruction.
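The following toy Python model (the register names and dependency table are illustrative assumptions, not the patent's structures) captures just the WAW rule: a deferred write whose destination was already written by a later non-deferred instruction is forwarded to younger deferred instructions but never committed to the architectural register.

    # Toy model of the WAW rule only (registers and the dependency table are
    # illustrative assumptions, not the patent's structures).

    arch_regs = {"r1": 0, "r2": 0}
    waw_hazard = {"r1": True, "r2": False}   # speculative copy of dependency info
    forwarded = {}                           # results bypassed between deferred insns

    def read(reg):
        # Younger deferred instructions see forwarded values first.
        return forwarded.get(reg, arch_regs[reg])

    def execute_deferred(dest, value):
        if waw_hazard[dest]:
            forwarded[dest] = value          # forward only; architectural state untouched
        else:
            arch_regs[dest] = value          # no hazard, so the result can be committed

    arch_regs["r1"] = 99                     # result of the later non-deferred write
    execute_deferred("r1", 5)                # deferred write with a WAW hazard
    execute_deferred("r2", 7)                # deferred write with no hazard
    print(read("r1"), arch_regs["r1"], arch_regs["r2"])   # 5 99 7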
Abstract:
One embodiment of the present invention provides a system which creates multiple checkpoints in a processor that supports speculative execution. The system starts by issuing instructions for execution in program order during execution of a program in a normal-execution mode. Upon encountering a launch condition during an instruction that causes the processor to enter execute-ahead mode, the system generates an initial checkpoint and commences execution of instructions in execute-ahead mode. Upon encountering a predefined condition during execute-ahead mode, the system generates an additional checkpoint and continues to execute instructions in execute-ahead mode. Generating the additional checkpoint allows the processor to return to the additional checkpoint, instead of the previous checkpoint, if the processor subsequently encounters a condition that requires it to return to a checkpoint.
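A minimal sketch of the multiple-checkpoint idea, with invented state and function names: each checkpoint is pushed onto a list, and a rollback restores the most recent one, so far less work has to be replayed than if only the initial checkpoint existed.

    # Minimal sketch with invented names: checkpoints accumulate on a list, and
    # a rollback restores the most recent one rather than the initial one.

    import copy

    state = {"pc": 0, "r1": 0}
    checkpoints = []

    def take_checkpoint():
        checkpoints.append(copy.deepcopy(state))

    def rollback():
        # Return to the most recent checkpoint, not the original launch point.
        state.update(checkpoints[-1])

    take_checkpoint()                 # initial checkpoint at the launch condition
    state.update(pc=10, r1=1)
    take_checkpoint()                 # additional checkpoint at a predefined condition
    state.update(pc=20, r1=2)
    rollback()                        # a condition forces a return to a checkpoint
    print(state)                      # {'pc': 10, 'r1': 1}: far less work to replay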
Abstract:
One embodiment of the present invention provides a system that enforces memory reference ordering requirements, such as Total Store Ordering (TSO), at a Level 1 (L1) cache in a multiprocessor. During operation, while executing instructions in a speculative-execution mode, the system receives an invalidation signal for a cache line at the L1 cache, wherein the invalidation signal is received from a cache-coherence system within the multiprocessor. In response to the invalidation signal, if the cache line exists in the L1 cache, the system examines a load-mark in the cache line, wherein a set load-mark indicates that the cache line has been loaded from during speculative execution. If the load-mark is set, the system fails the speculative-execution mode and resumes a normal-execution mode from a checkpoint. By failing the speculative-execution mode, the system ensures that a potential update to the cache line indicated by the invalidation signal will not cause the memory reference ordering requirements to be violated during the speculative-execution mode.
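The sketch below (the cache contents and identifiers are assumptions for illustration) models only the L1-side check: an incoming invalidation that hits a line whose load-mark is set while the processor is speculating raises a failure, which would send the processor back to its checkpoint.

    # Rough model (cache contents and identifiers are assumptions) of the
    # L1-side check that protects the ordering requirements.

    class SpeculationFailed(Exception):
        pass

    l1_cache = {
        0x100: {"load_mark": True},    # loaded from during speculative execution
        0x200: {"load_mark": False},
    }

    def on_invalidation(line_addr, speculating):
        line = l1_cache.get(line_addr)
        if line is None:
            return                     # line not present: nothing to check
        if speculating and line["load_mark"]:
            # The pending update could violate TSO, so fail the speculative-
            # execution mode and resume from the checkpoint.
            raise SpeculationFailed(hex(line_addr))
        l1_cache.pop(line_addr)        # otherwise simply honor the invalidation

    on_invalidation(0x200, speculating=True)   # fine: the line was never loaded from
    try:
        on_invalidation(0x100, speculating=True)
    except SpeculationFailed as err:
        print("speculation failed on line", err)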
Abstract:
One embodiment of the present invention supports execution of a start transactional execution (STE) instruction, which marks the beginning of a block of instructions to be executed transactionally. Upon encountering the STE instruction during execution of a program, the system commences transactional execution of the block of instructions following the STE instruction. Changes made during this transactional execution are not committed to the architectural state of the processor until the transactional execution successfully completes.
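As an illustration of the buffering semantics only (the Transaction class and its methods are invented, not the patent's interface), stores made after the STE point are held in a buffer and reach the architectural state only when the block commits.

    # Sketch of the buffering semantics (the Transaction class and its methods
    # are invented for illustration, not the patent's interface).

    memory = {"x": 0, "y": 0}

    class Transaction:
        def __init__(self):
            self.buffer = {}                  # uncommitted transactional writes
        def store(self, addr, value):
            self.buffer[addr] = value
        def load(self, addr):
            return self.buffer.get(addr, memory[addr])
        def commit(self):
            memory.update(self.buffer)        # changes become architectural only here

    txn = Transaction()                       # plays the role of the STE instruction
    txn.store("x", 1)
    txn.store("y", txn.load("x") + 1)
    print(memory)                             # {'x': 0, 'y': 0}: nothing committed yet
    txn.commit()                              # the block completed successfully
    print(memory)                             # {'x': 1, 'y': 2}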
Abstract:
One embodiment of the present invention provides a system that supports executing a fail instruction, which terminates transactional execution of a block of instructions. During operation, the system facilitates transactional execution of a block of instructions within a program, wherein changes made during the transactional execution are not committed to the architectural state of the processor until the transactional execution successfully completes. If a fail instruction is encountered during this transactional execution, the system terminates the transactional execution without committing results of the transactional execution to the architectural state of the processor.
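A companion sketch in the same invented style: reaching the fail path simply discards the buffered transactional writes, so the architectural state is untouched.

    # Companion sketch (invented names): the fail path discards the buffered
    # writes, so nothing reaches the architectural state.

    memory = {"balance": 100}
    txn_buffer = {}

    txn_buffer["balance"] = memory["balance"] - 150   # tentative withdrawal
    if txn_buffer["balance"] < 0:
        txn_buffer.clear()            # fail: terminate without committing results
    else:
        memory.update(txn_buffer)     # normal completion would commit here

    print(memory)                     # {'balance': 100}: architectural state untouched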
Abstract:
One embodiment of the present invention provides a system that synchronizes threads on a multi-threaded processor. The system starts by executing instructions from a multi-threaded program using a first thread and a second thread. When the first thread reaches a predetermined location in the multi-threaded program, the first thread executes a Start-Transactional-Execution (STE) instruction to commence transactional execution, wherein the STE instruction specifies a location to branch to if transactional execution fails. During the subsequent transactional execution, the first thread accesses a mailbox location in memory (which is also accessible by the second thread) and then executes instructions that cause the first thread to wait. When the second thread reaches a second predetermined location in the multi-threaded program, the second thread signals the first thread by accessing the mailbox location, which causes the transactional execution of the first thread to fail, thereby causing the first thread to resume non-transactional execution from the location specified in the STE instruction. In this way, the second thread can signal to the first thread without the first thread having to poll a shared variable.
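The Python analogy below is intentionally loose (an Event stands in for the coherence conflict that fails the transaction, and all names are invented), but it shows the shape of the synchronization pattern: the first thread monitors the mailbox and waits, and the second thread's store to the mailbox is what wakes the first thread at the STE fail target.

    # Intentionally loose analogy: an Event stands in for the coherence conflict
    # that fails the first thread's transaction (all names are invented).

    import threading, time

    mailbox_written = threading.Event()

    def first_thread():
        # STE ...; load the mailbox; wait -- until the transaction is failed.
        mailbox_written.wait()
        # Control resumes at the fail target specified by the STE instruction.
        print("first thread: transaction failed, resuming at the fail target")

    def second_thread():
        time.sleep(0.1)
        # Store to the mailbox location, conflicting with the first thread's read.
        mailbox_written.set()
        print("second thread: signaled via the mailbox store")

    t1 = threading.Thread(target=first_thread)
    t2 = threading.Thread(target=second_thread)
    t1.start(); t2.start()
    t1.join(); t2.join()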
Abstract:
A technique for operating a computing apparatus includes allocating a working register file entry corresponding to a register in a working register file when an instruction referencing the register proceeds through a particular stage of the computing apparatus. The technique maintains the working register file entry until at least a predetermined number of subsequent instructions have similarly proceeded through the particular stage.
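A toy model of the retention rule, with an assumed retention count of three and invented names: an entry allocated when an instruction passes the stage is kept until at least that many later instructions have passed the same stage.

    # Toy model of the retention rule (the retention count and names are
    # assumptions): an entry stays until enough later instructions have passed.

    RETAIN_FOR = 3                          # predetermined number of subsequent instructions
    working_register_file = {}              # register name -> (value, seq at allocation)
    seq_at_stage = 0

    def reach_stage(dest_reg, value):
        global seq_at_stage
        seq_at_stage += 1
        working_register_file[dest_reg] = (value, seq_at_stage)
        # Drop an entry only once RETAIN_FOR later instructions have passed the stage.
        for reg, (_, alloc_seq) in list(working_register_file.items()):
            if seq_at_stage - alloc_seq > RETAIN_FOR:
                del working_register_file[reg]

    for i in range(6):
        reach_stage(f"r{i}", i * 10)
    print(list(working_register_file))      # only entries from the last few instructions remain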