Abstract:
A method, programmed medium and system are provided for enabling a core's cache capacity to be increased by using the caches of disabled or non-enabled cores on the same chip. The caches of disabled or non-enabled cores on a chip are made accessible to store cache lines for those chip cores that have been enabled, thereby extending the cache capacity of the enabled cores.
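To make the idea concrete, the following is a minimal sketch in C, under assumed structures (chip, core, cache_line) and sizes, of how a lookup from an enabled core might fall through to the cache array of a disabled core acting as a capacity donor; it is an illustration of the concept, not the patented implementation.

    /* Illustrative model only: a chip whose disabled cores lend their
     * cache arrays to the enabled cores.  Names and sizes are assumptions. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CORES_PER_CHIP  8
    #define LINES_PER_CACHE 4               /* tiny, for the sketch */

    struct cache_line { bool valid; uint64_t tag; };

    struct core {
        bool enabled;                       /* fused off / not licensed if false */
        struct cache_line lines[LINES_PER_CACHE];
    };

    struct chip { struct core cores[CORES_PER_CHIP]; };

    /* Search one core's private cache for a tag. */
    static struct cache_line *find_line(struct core *c, uint64_t tag)
    {
        for (int i = 0; i < LINES_PER_CACHE; i++)
            if (c->lines[i].valid && c->lines[i].tag == tag)
                return &c->lines[i];
        return NULL;
    }

    /* Lookup for an enabled core: try its own cache first, then fall
     * through to the caches of disabled cores acting as extra capacity. */
    struct cache_line *chip_lookup(struct chip *chip, int requester, uint64_t tag)
    {
        struct cache_line *hit = find_line(&chip->cores[requester], tag);
        if (hit)
            return hit;
        for (int i = 0; i < CORES_PER_CHIP; i++) {
            if (chip->cores[i].enabled)     /* only borrow from disabled cores */
                continue;
            hit = find_line(&chip->cores[i], tag);
            if (hit)
                return hit;
        }
        return NULL;                        /* miss: fetch from memory */
    }

    int main(void)
    {
        struct chip chip = {0};
        chip.cores[0].enabled = true;
        /* Pretend a victim from core 0 was installed in disabled core 1. */
        chip.cores[1].lines[0] = (struct cache_line){ .valid = true, .tag = 0x1234 };
        printf("hit in donor cache: %s\n",
               chip_lookup(&chip, 0, 0x1234) ? "yes" : "no");
        return 0;
    }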
Abstract:
Requests for power reduction are handled by first allowing any partition to request an amount of power change, e.g. a reduction. In response to the request for power reduction, the whole amount of the power reduction is distributed in equal proportions across each of a set of cores that provide the entitlements to the partitions, and the entitlement of the requesting partition is reduced by an amount corresponding to the whole amount of the power change.
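A small worked sketch of the arithmetic as described, with assumed core counts and entitlement values: the requested reduction is spread in equal shares over the cores backing the partitions' entitlements, while the requesting partition's entitlement is charged the whole amount.

    /* Illustrative arithmetic only; the structures and values are assumptions. */
    #include <stdio.h>

    #define NUM_CORES      4
    #define NUM_PARTITIONS 3

    static double core_power[NUM_CORES]       = { 50.0, 50.0, 50.0, 50.0 }; /* watts */
    static double entitlement[NUM_PARTITIONS] = { 60.0, 80.0, 60.0 };       /* watts */

    /* Partition `requester` asks to give back `reduction` watts. */
    void handle_power_reduction(int requester, double reduction)
    {
        /* Spread the whole reduction in equal shares over the cores that
         * provide the partitions' entitlements ... */
        double per_core = reduction / NUM_CORES;
        for (int i = 0; i < NUM_CORES; i++)
            core_power[i] -= per_core;

        /* ... and charge the whole amount to the requesting partition. */
        entitlement[requester] -= reduction;
    }

    int main(void)
    {
        handle_power_reduction(1, 20.0);    /* partition 1 gives back 20 W */
        for (int i = 0; i < NUM_CORES; i++)
            printf("core %d budget: %.1f W\n", i, core_power[i]);
        printf("partition 1 entitlement: %.1f W\n", entitlement[1]);
        return 0;
    }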
Abstract:
Illustrated embodiments provide a computer implemented method and data processing system for redispatching a partition by tracking a set of memory pages belonging to the dispatched partition. In one illustrative embodiment, the computer implemented method comprises finding an effective page address to real page address mapping for a page address miss in response to determining the page address miss in a page addressing buffer, and saving the mapping as an entry in an array. The computer implemented method creates a preserved array from the array in response to determining the dispatched partition to be an undispatched partition. The computer implemented method further analyzes the preserved array for a compressed page in response to determining the undispatched partition is now redispatched, and decompresses the compressed page prior to the partition being redispatched.
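The bookkeeping side of this can be sketched as follows, with all names (partition, record_page_miss, on_undispatch) and array sizes being assumptions for illustration: mappings resolved on a page-address miss are appended to an array, and a preserved copy is taken when the partition is undispatched.

    /* Illustrative bookkeeping only; names and sizes are assumptions. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_TRACKED_PAGES 64

    struct page_mapping {
        uint64_t effective_addr;   /* guest effective page address */
        uint64_t real_addr;        /* resolved real page address   */
    };

    struct partition {
        struct page_mapping tracked[MAX_TRACKED_PAGES];   /* filled while dispatched */
        struct page_mapping preserved[MAX_TRACKED_PAGES]; /* snapshot at undispatch  */
        int tracked_count;
        int preserved_count;
    };

    /* Called when a lookup misses in the page-addressing buffer and the
     * effective-to-real mapping has just been resolved. */
    void record_page_miss(struct partition *p, uint64_t ea, uint64_t ra)
    {
        if (p->tracked_count < MAX_TRACKED_PAGES)
            p->tracked[p->tracked_count++] =
                (struct page_mapping){ .effective_addr = ea, .real_addr = ra };
    }

    /* Called when the partition is undispatched: keep a preserved copy of
     * the working set recorded so far. */
    void on_undispatch(struct partition *p)
    {
        memcpy(p->preserved, p->tracked, sizeof(p->tracked));
        p->preserved_count = p->tracked_count;
        p->tracked_count = 0;
    }

    int main(void)
    {
        struct partition p = {0};
        record_page_miss(&p, 0x10000000ULL, 0x20000000ULL);
        on_undispatch(&p);
        printf("preserved %d mapping(s)\n", p.preserved_count);
        return 0;
    }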
Abstract:
A method, a system, and computer readable program code for managing cache coherence in a virtual machine managed system are provided. In response to a processor issuing a message to be broadcast, a determination is made as to whether the processor is part of a virtual domain. In response to a determination that the processor is part of the virtual domain, the message and a first bit mask are sent from a source node to a destination node. In response to receiving the message and the first bit mask, one of a primary link or a secondary link is selected to send the message and the first bit mask over, forming a selected link. The message and the first bit mask are sent to the destination node over the selected link.
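A minimal sketch of the routing decision, assuming a simple link-health model and illustrative names (coherence_msg, select_link): a message carrying the first bit mask is sent to each destination node over the primary link when available, otherwise over the secondary link.

    /* Illustrative routing sketch; the link-health model and names are assumptions. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    enum link_id { LINK_PRIMARY, LINK_SECONDARY };

    struct node_link {
        bool primary_up;
        bool secondary_up;
    };

    struct coherence_msg {
        uint32_t payload;
        uint32_t domain_mask;   /* first bit mask: processors in the virtual domain */
    };

    /* Pick the primary link when it is available, otherwise fall back. */
    enum link_id select_link(const struct node_link *l)
    {
        return l->primary_up ? LINK_PRIMARY : LINK_SECONDARY;
    }

    void send_to_node(const struct node_link *l, const struct coherence_msg *m, int dest)
    {
        enum link_id link = select_link(l);
        printf("node %d <- msg 0x%x mask 0x%x via %s link\n",
               dest, (unsigned)m->payload, (unsigned)m->domain_mask,
               link == LINK_PRIMARY ? "primary" : "secondary");
    }

    /* Broadcast path: only processors in a virtual domain attach the mask. */
    void broadcast(bool in_virtual_domain, uint32_t payload, uint32_t mask,
                   const struct node_link *links, int num_nodes)
    {
        struct coherence_msg m = { .payload = payload,
                                   .domain_mask = in_virtual_domain ? mask : 0 };
        for (int d = 0; d < num_nodes; d++)
            send_to_node(&links[d], &m, d);
    }

    int main(void)
    {
        struct node_link links[2] = { { true, true }, { false, true } };
        broadcast(true, 0xCAFE, 0x0F, links, 2);
        return 0;
    }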
Abstract:
A mechanism for optimizing system performance using spare processing cores in a virtualized environment. Upon detecting that a workload partition needs to run on a virtual processor in the virtualized system, a state of the virtual processor is changed to a wait state. A first node comprising memory that is local to the workload partition is determined. A determination is also made as to whether a non-spare processor core in the first node is available to run the workload partition. If no non-spare processor core is available, a free non-spare processor core in a second node is located, and the state of the free non-spare processor core in the second node is changed to an inactive state. The state of a spare processor core in the first node is changed to an active state, and the workload partition is dispatched to the spare processor core in the first node for execution.
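The placement decision can be sketched as below, with the node/core layout, state names, and the first-fit search all being illustrative assumptions: a free local non-spare core is preferred; failing that, a free non-spare core on another node is made inactive and a local spare core is activated to take the work.

    /* Illustrative decision flow; node/core layout and state names are assumptions. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NODES          2
    #define CORES_PER_NODE 4

    enum core_state { CORE_ACTIVE, CORE_INACTIVE };

    struct core {
        bool spare;
        bool busy;
        enum core_state state;
    };

    struct node { struct core cores[CORES_PER_NODE]; };

    static struct node nodes[NODES];

    static int find_core(int node, bool spare, bool must_be_free)
    {
        for (int c = 0; c < CORES_PER_NODE; c++) {
            struct core *k = &nodes[node].cores[c];
            if (k->spare == spare && (!must_be_free || !k->busy))
                return c;
        }
        return -1;
    }

    /* Dispatch a workload partition whose memory is local to `home_node`
     * (the virtual processor is assumed parked in a wait state meanwhile). */
    void dispatch_workload(int home_node)
    {
        int c = find_core(home_node, /*spare=*/false, /*must_be_free=*/true);
        if (c >= 0) {
            nodes[home_node].cores[c].busy = true;      /* run locally */
            printf("dispatched on node %d non-spare core %d\n", home_node, c);
            return;
        }
        /* No local non-spare core is free: idle a free non-spare core
         * elsewhere and activate a local spare in its place. */
        for (int n = 0; n < NODES; n++) {
            if (n == home_node) continue;
            int remote = find_core(n, false, true);
            if (remote < 0) continue;
            nodes[n].cores[remote].state = CORE_INACTIVE;
            int spare = find_core(home_node, /*spare=*/true, /*must_be_free=*/true);
            if (spare < 0) break;
            nodes[home_node].cores[spare].state = CORE_ACTIVE;
            nodes[home_node].cores[spare].busy = true;
            printf("dispatched on node %d spare core %d (node %d core %d idled)\n",
                   home_node, spare, n, remote);
            return;
        }
        printf("no placement found\n");
    }

    int main(void)
    {
        /* One spare core per node; node 0's non-spare cores are all busy. */
        for (int n = 0; n < NODES; n++)
            nodes[n].cores[CORES_PER_NODE - 1].spare = true;
        for (int c = 0; c < CORES_PER_NODE - 1; c++)
            nodes[0].cores[c].busy = true;
        dispatch_workload(0);
        return 0;
    }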
Abstract:
Illustrated embodiments provide a computer implemented method and data processing system for redispatching a partition by tracking a set of memory pages belonging to the dispatched partition. In one illustrative embodiment, the computer implemented method comprises finding an effective page address to real page address mapping for a page address miss to create a found real page address and page size combination, in response to determining the page address miss in a page addressing buffer, and saving the found real page address and page size combination as an entry in a set of entries in an array. The computer implemented method further creates a preserved array from the array in response to determining the dispatched partition to be an undispatched partition. The computer implemented method then analyzes each entry of the preserved array for a compressed page in response to determining the undispatched partition is now redispatched, and, in response to determining a compressed page, invokes a partition management firmware function to decompress the compressed page prior to the partition being redispatched.
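The redispatch-time pass complements the tracking sketch above; in the sketch below the firmware_decompress_page call is a hypothetical placeholder for the partition management firmware function, and the entry layout is assumed: each preserved entry is examined and any compressed page is decompressed before the partition is redispatched.

    /* Illustrative redispatch-time pass; the firmware call is a hypothetical placeholder. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct preserved_entry {
        uint64_t real_addr;
        uint32_t page_size;
        bool     compressed;   /* page was compressed while the partition was out */
    };

    /* Stand-in for the partition management firmware function the abstract
     * refers to; a real hypervisor would expose its own interface. */
    static void firmware_decompress_page(uint64_t real_addr, uint32_t page_size)
    {
        printf("decompressing page at 0x%llx (size %u)\n",
               (unsigned long long)real_addr, (unsigned)page_size);
    }

    /* Walk the preserved array before the partition is redispatched and
     * decompress any page that was compressed in the meantime. */
    void prepare_redispatch(struct preserved_entry *preserved, int count)
    {
        for (int i = 0; i < count; i++)
            if (preserved[i].compressed) {
                firmware_decompress_page(preserved[i].real_addr,
                                         preserved[i].page_size);
                preserved[i].compressed = false;
            }
    }

    int main(void)
    {
        struct preserved_entry preserved[2] = {
            { 0x10000, 4096, true  },
            { 0x20000, 4096, false },
        };
        prepare_redispatch(preserved, 2);
        return 0;
    }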
Abstract:
Mechanisms are provided, for implementation in a data processing system having at least one physical processor and at least one associated cache memory, for allocating cache resources of the at least one cache memory to virtual processors of the data processing system. The mechanisms identify a plurality of high priority virtual processors in the data processing system. The mechanisms further determine a percentage of cache lines of the at least one cache memory to be assigned to high priority virtual processors. Moreover, the mechanisms mark a portion of the cache lines in the at least one cache memory as being evictable by only high priority virtual processors based on the determined percentage of cache lines to be assigned to high priority virtual processors. The marked portion of the cache lines cannot be evicted by lower priority virtual processors having a priority lower than the high priority virtual processors.
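A minimal sketch of how victim selection might honor such marking, with the set geometry, the hp_only flag, and the percentage-based marking all being assumptions for illustration: lines marked for high-priority-only eviction are skipped when the requesting virtual processor has lower priority.

    /* Illustrative victim selection; the marking scheme and ratios are assumptions. */
    #include <stdbool.h>
    #include <stdio.h>

    #define SET_WAYS 8

    struct cache_line {
        bool hp_only;   /* evictable only by high-priority virtual processors */
        bool valid;
    };

    /* Mark a fraction of the ways as reserved for high-priority eviction,
     * e.g. 50% when half the cache is to be assigned to high priority. */
    void mark_hp_ways(struct cache_line set[SET_WAYS], int hp_percent)
    {
        int reserved = (SET_WAYS * hp_percent) / 100;
        for (int w = 0; w < SET_WAYS; w++)
            set[w].hp_only = (w < reserved);
    }

    /* Choose a victim way: a low-priority virtual processor may not evict
     * lines marked hp_only; a high-priority one may evict anything. */
    int pick_victim(const struct cache_line set[SET_WAYS], bool requester_is_hp)
    {
        for (int w = 0; w < SET_WAYS; w++) {
            if (!set[w].valid)
                return w;                   /* free way, no eviction needed */
            if (!requester_is_hp && set[w].hp_only)
                continue;                   /* protected from low priority */
            return w;                       /* simplistic: first eligible way */
        }
        return -1;                          /* low-priority request must retry */
    }

    int main(void)
    {
        struct cache_line set[SET_WAYS] = {0};
        mark_hp_ways(set, 50);
        for (int w = 0; w < SET_WAYS; w++) set[w].valid = true;
        printf("low-priority victim:  way %d\n", pick_victim(set, false));
        printf("high-priority victim: way %d\n", pick_victim(set, true));
        return 0;
    }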