Abstract:
A method, apparatus, and computer instructions are provided to autonomically monitor and adjust system characteristics based on a customer optimization goal specified in a policy or profile. An autonomic management component comprising a set of control algorithms is implemented in firmware. In response to reading system characteristics from a plurality of sensors, the autonomic management component selects at least one control algorithm from the set, and the selected control algorithm adjusts the parameters of the system characteristics to optimize performance according to the optimization goal specified by the customer.
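A minimal Python sketch of the monitoring-and-adjustment cycle described above, assuming hypothetical goal names, sensor fields, and control algorithms; none of these identifiers come from the abstract itself:

```python
from dataclasses import dataclass


@dataclass
class SensorReadings:
    temperature_c: float   # thermal sensor
    utilization: float     # processor utilization, 0.0-1.0
    power_watts: float     # power sensor


def throttle_for_power(readings, params):
    # Lower the frequency cap when power draw exceeds the budget.
    if readings.power_watts > params["power_budget_w"]:
        params["freq_cap_mhz"] = max(800, params["freq_cap_mhz"] - 100)
    return params


def boost_for_performance(readings, params):
    # Raise the frequency cap while thermal headroom remains.
    if readings.temperature_c < params["temp_limit_c"] and readings.utilization > 0.8:
        params["freq_cap_mhz"] = min(3600, params["freq_cap_mhz"] + 100)
    return params


# The "set of control algorithms", keyed by the optimization goal
# named in the customer policy/profile (keys are assumptions).
CONTROL_ALGORITHMS = {
    "minimize_power": throttle_for_power,
    "maximize_performance": boost_for_performance,
}


def autonomic_step(profile, readings, params):
    """One cycle: pick the algorithm named by the profile's goal and
    let it adjust the system parameters."""
    algorithm = CONTROL_ALGORITHMS[profile["optimization_goal"]]
    return algorithm(readings, params)


if __name__ == "__main__":
    profile = {"optimization_goal": "minimize_power"}
    params = {"freq_cap_mhz": 3200, "power_budget_w": 150.0, "temp_limit_c": 85.0}
    readings = SensorReadings(temperature_c=70.0, utilization=0.9, power_watts=180.0)
    print(autonomic_step(profile, readings, params))
```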
Abstract:
A method, apparatus, and computer program product are disclosed for selectively prohibiting speculative conditional branch execution. A particular type of conditional branch instruction is selected. An indication is stored within each instruction that is of the particular type of conditional branch instruction. A processor then fetches a first instruction from code that is to be executed. A determination is made regarding whether the first instruction includes the indication. In response to determining that the instruction includes the indication: speculative execution of the first instruction is prohibited, an actual location to which the first instruction will branch is resolved, and execution of the code is branched to the actual location. In response to determining that the instruction does not include the indication, the first instruction is speculatively executed.
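A short sketch of the fetch-time decision this abstract describes; the Instruction fields, the stand-in predictor, and the next_pc helper are illustrative assumptions, not the patented hardware logic:

```python
from dataclasses import dataclass


@dataclass
class Instruction:
    is_cond_branch: bool
    no_speculate: bool    # the stored indication for the selected branch type
    taken_target: int
    condition: bool       # actual branch outcome, known once the branch is resolved


def predict_taken(pc, instr):
    # Stand-in branch predictor: always predict not-taken.
    return False


def next_pc(pc, instr):
    """Return the next fetch address for a conditional branch."""
    if instr.no_speculate:
        # Indication present: prohibit speculation, resolve the actual
        # target, and branch directly to it.
        return instr.taken_target if instr.condition else pc + 1
    # No indication: speculate using the predictor (may later be flushed).
    return instr.taken_target if predict_taken(pc, instr) else pc + 1
```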
Abstract:
A system and method for issuing a processor instruction to multiple processing sections arranged in an out-of-order processing pipeline architecture. The multiple processing sections include a first execution unit with a pipeline length and a second execution unit operating upon data produced by the first execution unit. An instruction issue unit accepts a complex instruction that is cracked into respective micro-ops for the first execution unit and the second execution unit. The instruction issue unit issues the first micro-op to the first execution unit to produce intermediate data. The instruction issue unit then delays for a time period corresponding to the processing pipeline length of the first execution unit. After the delay, a second micro-op is issued to the second execution unit.
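A toy, cycle-based sketch of the cracked-issue sequence described above; the pipeline depth, the crack helper, and the micro-op names are assumptions made for illustration:

```python
PIPELINE_LENGTH_UNIT1 = 4   # assumed depth of the first execution unit


def crack(complex_instr):
    # Hypothetical cracking: split into (micro-op for unit 1, micro-op for unit 2).
    return complex_instr["uop1"], complex_instr["uop2"]


def issue(complex_instr, current_cycle):
    """Return (cycle, unit, micro-op) issue events for the cracked instruction."""
    uop1, uop2 = crack(complex_instr)
    events = [(current_cycle, "unit1", uop1)]
    # Hold the dependent micro-op back by the producing unit's pipeline
    # length so it sees the intermediate data.
    events.append((current_cycle + PIPELINE_LENGTH_UNIT1, "unit2", uop2))
    return events


if __name__ == "__main__":
    print(issue({"uop1": "multiply", "uop2": "accumulate"}, current_cycle=10))
```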
Abstract:
A method and logical apparatus for managing processing system resource use for speculative execution reduces the power and performance burden associated with inefficient speculative execution of program instructions. A measure of the efficiency of speculative execution is used to reduce resources allocated to a thread while the speculation efficiency is low. The resource control applied may be the number of instruction fetches allocated to the thread or the number of execution time slices. Alternatively, or in combination, the size of a prefetch instruction storage allocated to the thread may be limited. The control condition may be comparison of the number of correct or incorrect speculations to a threshold, comparison of the number of correct to incorrect speculations, or a more complex evaluator, such as the ratio of incorrect speculations to total speculations.
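A minimal sketch of one possible control condition named above, the ratio of incorrect to total speculations compared against a threshold; the threshold value and fetch-slot counts are assumed, not taken from the source:

```python
MISPREDICT_RATIO_THRESHOLD = 0.25   # assumed threshold
NORMAL_FETCH_SLOTS = 4
REDUCED_FETCH_SLOTS = 1


def fetch_slots_for_thread(correct_speculations, incorrect_speculations):
    """Return the number of fetch slots to allocate to a thread."""
    total = correct_speculations + incorrect_speculations
    if total == 0:
        return NORMAL_FETCH_SLOTS
    # Ratio of incorrect to total speculations, one evaluator the
    # abstract mentions.
    mispredict_ratio = incorrect_speculations / total
    if mispredict_ratio > MISPREDICT_RATIO_THRESHOLD:
        return REDUCED_FETCH_SLOTS   # throttle the inefficient thread
    return NORMAL_FETCH_SLOTS


if __name__ == "__main__":
    print(fetch_slots_for_thread(correct_speculations=30, incorrect_speculations=20))
    print(fetch_slots_for_thread(correct_speculations=95, incorrect_speculations=5))
```

The same comparison could gate the prefetch-storage quota or execution time slices instead of fetch slots, as the abstract notes.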
Abstract:
A method, information processing system, and computer program product crack and/or shorten computer executable instructions. At least one instruction is received. The at least one instruction is analyzed. An instruction type associated with the at least one instruction is identified. At least one of a base field, an index field, one or more operands, and a mask field of the instruction is analyzed. At least one of the following is then performed: the at least one instruction is organized into a set of units of operation; and the at least one instruction is shortened. The set of units of operation is then executed.
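A hypothetical decode-time sketch of cracking an instruction into units of operation or shortening it based on its type and fields; the instruction types, field names, and rules below are illustrative assumptions:

```python
def decode(instr):
    """Return the list of units of operation to execute for `instr`."""
    if instr["type"] == "load_multiple":
        # Crack: one unit of operation per destination register.
        return [{"op": "load", "reg": r, "base": instr["base"], "index": instr["index"]}
                for r in instr["regs"]]
    if instr["type"] == "insert_under_mask" and instr["mask"] == 0xF:
        # Shorten: a full mask reduces this to a plain register move.
        return [{"op": "move", "reg": instr["regs"][0], "base": instr["base"]}]
    # Otherwise pass through as a single unit of operation.
    return [dict(instr, op=instr["type"])]


if __name__ == "__main__":
    lm = {"type": "load_multiple", "regs": [2, 3, 4], "base": 5, "index": 0, "mask": 0}
    for uop in decode(lm):
        print(uop)
```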
Abstract:
A multithreaded processor, fetch control for a multithreaded processor, and a method of fetching in the multithreaded processor. Processor event and use (EU) signals are monitored for downstream pipeline conditions indicating pipeline execution thread states. Instruction cache fetches are skipped for any thread that is incapable of receiving fetched cache contents, e.g., because the thread is full or stalled. Also, consecutive fetches may be selected for the same thread, e.g., on a branch mispredict. Thus, the processor avoids wasting power on unnecessary or place-keeper fetches.
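A simplified sketch of the fetch-selection policy this abstract describes, with assumed thread-state fields: threads that cannot accept a fetch are skipped, and a thread recovering from a branch mispredict may be granted consecutive fetch cycles:

```python
from dataclasses import dataclass


@dataclass
class ThreadState:
    tid: int
    buffer_full: bool
    stalled: bool
    mispredict_recovery: bool


def select_fetch_thread(threads, last_fetched_tid):
    """Pick the thread to fetch for this cycle, or None to skip fetching."""
    # Prefer a consecutive fetch for a thread recovering from a mispredict.
    for t in threads:
        if t.tid == last_fetched_tid and t.mispredict_recovery and not t.buffer_full:
            return t.tid
    # Otherwise pick any thread that can actually accept fetched contents.
    for t in threads:
        if not t.buffer_full and not t.stalled:
            return t.tid
    return None   # every thread is full or stalled: skip the fetch, save power


if __name__ == "__main__":
    threads = [ThreadState(0, buffer_full=True, stalled=False, mispredict_recovery=False),
               ThreadState(1, buffer_full=False, stalled=False, mispredict_recovery=True)]
    print(select_fetch_thread(threads, last_fetched_tid=1))
```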