Abstract:
A system and computer program product allocates energy entitlement to a logical partition (LPAR) executing on a data processing system. An energy entitlement allocation (EEA) utility enables an administrator to specify a minimum and/or maximum energy entitlement and an LPAR priority. When the relevant LPARs utilize the respective minimum energy entitlement based on respective energy consumption, the EEA utility determines whether the LPAR (and other LPARs) has satisfied a respective maximum energy entitlement. When the LPAR has not satisfied its maximum energy entitlement, the EEA utility allocates unused energy entitlement from the data processing system to the LPAR, according to an allocation policy. Additionally, the EEA utility dynamically adjusts a priority level for the LPAR to efficiently control resource allocation, according to the LPAR's energy consumption relative to its energy entitlement. In addition, the EEA utility is able to transfer unused energy entitlement to other data processing systems requiring further allocation of energy entitlement.
Abstract:
A method allocates energy entitlement to a logical partition (LPAR) executing on a data processing system. An energy entitlement allocation (EEA) utility enables an administrator to specify a minimum and/or maximum energy entitlement and an LPAR priority. When the relevant LPARs utilize the respective minimum energy entitlement based on a respective energy consumption, the EEA utility determines whether the LPAR(s) has satisfied a respective maximum energy entitlement. When the LPAR has not satisfied its maximum energy entitlement, the EEA utility allocates unused energy entitlement from the data processing system to the LPAR, according to an allocation policy. Additionally, the EEA utility dynamically adjusts a priority level for the LPAR to efficiently control resource allocation, according to the LPAR's energy consumption relative to its energy entitlement. In addition, the EEA utility is able to transfer unused energy entitlement to other data processing systems requiring further allocation of energy entitlement.
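As a rough illustration of the allocation behavior these two abstracts describe, the C sketch below distributes an unused energy pool to LPARs that have not yet reached their maximum entitlement, highest priority first, and then nudges each LPAR's priority according to its consumption relative to its entitlement. All names (struct lpar, distribute_unused, adjust_priority), the units, and the greedy policy itself are assumptions made for illustration, not details of the patented implementation.

#include <stddef.h>

/* Hypothetical per-LPAR accounting record; field names are illustrative. */
struct lpar {
    double min_entitlement;   /* guaranteed energy share per interval      */
    double max_entitlement;   /* cap on energy the LPAR may be allocated   */
    double allocated;         /* energy entitlement currently granted      */
    double consumed;          /* energy consumed in the last interval      */
    int    priority;          /* higher value means served first           */
};

/* Distribute a pool of unused energy entitlement to LPARs that have not yet
 * reached their maximum entitlement, highest priority first (greedy policy). */
static double distribute_unused(struct lpar *lpars, size_t n, double unused_pool)
{
    for (size_t i = 0; i < n && unused_pool > 0.0; i++) {
        /* selection pass: move the highest-priority remaining LPAR to slot i */
        size_t best = i;
        for (size_t j = i + 1; j < n; j++)
            if (lpars[j].priority > lpars[best].priority)
                best = j;
        struct lpar tmp = lpars[i];
        lpars[i] = lpars[best];
        lpars[best] = tmp;

        double headroom = lpars[i].max_entitlement - lpars[i].allocated;
        if (headroom > 0.0) {
            double grant = headroom < unused_pool ? headroom : unused_pool;
            lpars[i].allocated += grant;
            unused_pool -= grant;
        }
    }
    return unused_pool;  /* remainder may be offered to other systems */
}

/* Adjust an LPAR's priority according to its consumption relative to its
 * entitlement: over-consumers are demoted, light consumers are promoted. */
static void adjust_priority(struct lpar *p)
{
    if (p->consumed > p->allocated)
        p->priority--;
    else if (p->consumed < p->min_entitlement)
        p->priority++;
}

Any remainder returned by distribute_unused corresponds to the entitlement the abstracts describe as transferable to other data processing systems.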
Abstract:
A target computing system 10 is adapted to support a register window architecture, particularly for use when converting non-native subject code 17 into target code 21 executed by a target processor 13. A subject register stack data structure (an “SR stack”) 400 in memory has a plurality of frames 410 each containing a set of entries 401 corresponding to a subset of subject registers 502 of one register window 510 in a subject processor 3. The SR stack 400 is accessed by the target code 21 executing on the target processor 13. The SR stack 400 stores a large plurality of such frames 410 and thereby avoids overhead such as modelling automatic spill and fill operations from the windowed register file of the subject architecture. In one embodiment, a target computing system 10 having sixteen general purpose working registers is adapted to support a register window architecture reliant upon a register file containing tens or hundreds of subject registers 502.
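At its core the SR stack can be modelled as an array of fixed-size frames indexed by a top pointer, with the subject architecture's register-window save and restore operations mapped to simple index moves and every subject register access turned into a plain memory access. The sketch below is only an illustration under that assumption; the window size, frame count, and names (sr_save, sr_restore, sr_read) are invented here and do not come from the abstract.

#include <stddef.h>
#include <stdint.h>

/* Illustrative sizes: a window of 16 subject registers and 1024 frames held
 * in memory; both numbers are assumptions, not taken from the abstract. */
#define WINDOW_REGS 16
#define MAX_FRAMES  1024

struct sr_frame {
    uint64_t regs[WINDOW_REGS];  /* one register window's subject registers */
};

struct sr_stack {
    struct sr_frame frames[MAX_FRAMES];
    size_t          top;         /* index of the current window's frame */
};

/* Subject "save": enter a new register window by advancing to a fresh frame.
 * Because every frame already lives in memory, no spill trap is modelled. */
static struct sr_frame *sr_save(struct sr_stack *s)
{
    if (s->top + 1 < MAX_FRAMES)
        s->top++;
    return &s->frames[s->top];
}

/* Subject "restore": return to the caller's window; no fill trap is modelled. */
static struct sr_frame *sr_restore(struct sr_stack *s)
{
    if (s->top > 0)
        s->top--;
    return &s->frames[s->top];
}

/* Target code reads a subject register of the current window through a plain
 * memory access into the SR stack. */
static uint64_t sr_read(const struct sr_stack *s, unsigned reg)
{
    return s->frames[s->top].regs[reg % WINDOW_REGS];
}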
Abstract:
A native binding technique is provided for inserting calls to native functions during translation of subject code to target code, such that function calls in the subject program to subject code functions are replaced in target code with calls to native equivalents of the same functions. Parameters of native function calls are transformed from target code representations to be consistent with native code representations, native code calling conventions, and native function prototypes.
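A minimal sketch of such a binding, under the assumption of a table keyed by subject symbol name and a per-function wrapper that marshals arguments between the subject register representation and the native calling convention, is shown below; the cos() example and all names (struct binding, bound_cos, lookup_binding) are hypothetical.

#include <math.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Emulated subject register file (illustrative). */
struct subject_state {
    uint64_t r[32];   /* subject integer/argument registers */
};

/* One entry of a hypothetical binding table: a subject symbol is bound to a
 * native wrapper that knows how to marshal that function's parameters. */
typedef void (*native_stub)(struct subject_state *st);

struct binding {
    const char *subject_symbol;
    native_stub stub;
};

/* Wrapper for the subject program's calls to cos(): the argument is read from
 * the subject representation (raw IEEE-754 bits in a subject register),
 * converted to the native calling convention, passed to the native cos(),
 * and the result is written back the same way. */
static void bound_cos(struct subject_state *st)
{
    double x, r;
    memcpy(&x, &st->r[0], sizeof x);   /* subject -> native representation */
    r = cos(x);                        /* call the native equivalent       */
    memcpy(&st->r[0], &r, sizeof r);   /* native -> subject representation */
}

static const struct binding bindings[] = {
    { "cos", bound_cos },
};

/* At translation time, a subject call whose target is found in the table is
 * replaced with a call to its native stub instead of translating its body. */
static native_stub lookup_binding(const char *symbol)
{
    for (size_t i = 0; i < sizeof bindings / sizeof bindings[0]; i++)
        if (strcmp(bindings[i].subject_symbol, symbol) == 0)
            return bindings[i].stub;
    return NULL;
}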
Abstract:
A technique is provided for handling dynamically linked subject function calls that are arranged to pass subject control flow to an intermediate control structure, such as a procedure linkage table, and then to subject linker code for modifying link information associated with the subject function calls, during translation of subject code into target code in a dynamic binary translator. The subject code for execution on a subject processor is received by a translator, and corresponding target code for execution on the target processor is generated. The translator is arranged to build a function linkage table containing an entry giving the location of each function called by the subject code, so that code can be generated by the translator in which subject function calls are associated with code for performing the function, without generating target code corresponding to the intermediate control structure.
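One way to picture the function linkage table is as a small map from the subject call target (for example, the address of a procedure linkage table slot) to the resolved location of the code that performs the function, so a translated call can branch there directly. The C sketch below illustrates that idea only; the names (flt_entry, flt_bind, flt_resolve), the fixed capacity, and the linear lookup are assumptions, not the translator's actual mechanism.

#include <stddef.h>
#include <stdint.h>

/* A hypothetical function linkage table (FLT) built by the translator: each
 * entry records where the code for a dynamically linked subject function
 * actually lives, so calls can bypass the subject PLT and linker stubs. */
struct flt_entry {
    uint64_t subject_call_target;  /* address the subject call routes through (PLT slot) */
    void    *resolved_code;        /* translated or native code performing the function  */
};

#define FLT_CAPACITY 256

static struct flt_entry flt[FLT_CAPACITY];
static size_t flt_count;

/* Record the resolved location of a subject function at the point where the
 * subject linker code would have patched its link information. */
static void flt_bind(uint64_t plt_slot, void *code)
{
    if (flt_count < FLT_CAPACITY) {
        flt[flt_count].subject_call_target = plt_slot;
        flt[flt_count].resolved_code = code;
        flt_count++;
    }
}

/* When translating a subject call that targets a PLT slot, look the slot up
 * in the FLT and associate the call directly with the resolved code, rather
 * than generating target code for the PLT and linker stubs themselves. */
static void *flt_resolve(uint64_t plt_slot)
{
    for (size_t i = 0; i < flt_count; i++)
        if (flt[i].subject_call_target == plt_slot)
            return flt[i].resolved_code;
    return NULL;   /* not yet bound: translate through the intermediate structure */
}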
Abstract:
A dynamic code translator with isoblocking uses a return trampoline having branch instructions conditioned on different isostates to optimize return address translation, by allowing the hardware to predict that the address of a future return will be the address of the trampoline. An IP-relative call is inserted into translated code to write the trampoline address to a target link register and a target return address stack used by the native machine to predict return addresses. If a computed subject return address matches a subject return address register value, the current isostate of the isoblock is written to an isostate register. The isostate value in the isostate register is then used to select the branch instruction in the trampoline for the true subject return address. Sufficient code area in the trampoline instruction set can be reserved for a number of compare/branch pairs equal to the number of available isostates.
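Very loosely, the trampoline can be modelled as a selector over per-isostate return targets: generated target code would contain one compare/branch pair per isostate, which the switch below stands in for. Every name in the sketch (return_trampoline, isostate_register, return_targets, translated_return) and the fixed isostate count are assumptions made purely for illustration.

#include <stdint.h>

#define MAX_ISOSTATES 4   /* illustrative; matches the number of compare/branch pairs */

/* Per-return-site table filled in by the translator: for each isostate, the
 * target-code address corresponding to the true subject return address. */
static void (*return_targets[MAX_ISOSTATES])(void);

/* Written at the translated return when the computed subject return address
 * matches the subject return-address register. */
static uint32_t isostate_register;

/* The trampoline whose address the hardware return-address stack is primed
 * to predict. Real generated code would be a run of compare/branch pairs,
 * one per isostate; the switch models the same selection. */
static void return_trampoline(void)
{
    switch (isostate_register) {
    case 0: if (return_targets[0]) return_targets[0](); break;
    case 1: if (return_targets[1]) return_targets[1](); break;
    case 2: if (return_targets[2]) return_targets[2](); break;
    case 3: if (return_targets[3]) return_targets[3](); break;
    }
}

/* Translated return site: if the computed subject return address matches the
 * subject return-address register, record the current isostate so that the
 * trampoline picks the branch for the true subject return address. */
static void translated_return(uint64_t computed_subject_ra,
                              uint64_t subject_ra_register,
                              uint32_t current_isostate)
{
    if (computed_subject_ra == subject_ra_register)
        isostate_register = current_isostate;
    return_trampoline();   /* in real target code this is the predicted return */
}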