Abstract:
A method, system, and computer usable program product for managing the thermal condition of a memory are provided in the illustrative embodiments. A condition is identified in which a threshold value of a thermal condition of the memory has been exceeded or is likely to be exceeded. A portion of a first workload is identified as being a cause of exceeding the threshold. A second portion of a second workload is identified, the second portion not causing the threshold to be exceeded when executed. A set of operations corresponding to the first portion is interleaved with a second set of operations corresponding to the second portion. The interleaved first and second portions of the first and second workloads are executed, causing the thermal condition of the memory to remain below the threshold. The second portion may use a second memory, a second area of the memory, or a combination thereof when executing.
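The following is a minimal Python sketch of the interleaving idea described above, assuming a very simple thermal model: operations from the memory-hot portion of one workload are alternated with operations from a portion that does not stress the same memory, so a modeled temperature stays below the threshold. The threshold, heating and cooling deltas, and function names are illustrative assumptions, not values from the embodiment.

THRESHOLD_C = 80.0        # modeled temperature (degrees C) the memory must stay below
HEAT_PER_HOT_OP = 2.0     # modeled heating caused by one memory-hot operation
COOL_PER_COOL_OP = 1.5    # modeled cooling while a cool operation runs elsewhere

def interleave(hot_ops, cool_ops, start_temp=75.0):
    """Order operations so the modeled memory temperature stays below THRESHOLD_C."""
    temp = start_temp
    hot, cool = list(hot_ops), list(cool_ops)
    schedule = []
    while hot or cool:
        if hot and temp + HEAT_PER_HOT_OP < THRESHOLD_C:
            # There is thermal headroom: run an operation from the hot portion.
            schedule.append(hot.pop(0))
            temp += HEAT_PER_HOT_OP
        elif cool:
            # Run an operation from the portion that uses a second memory or a
            # second memory area, letting the hot memory recover.
            schedule.append(cool.pop(0))
            temp = max(start_temp, temp - COOL_PER_COOL_OP)
        else:
            # Only hot work remains and no headroom: model an idle cooling step.
            schedule.append("idle")
            temp = max(start_temp, temp - COOL_PER_COOL_OP)
    return schedule

if __name__ == "__main__":
    print(interleave([f"hot-{i}" for i in range(5)], [f"cool-{i}" for i in range(5)]))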
Abstract:
A symmetric multi-processor (SMP) system includes an SMP processor and operating system (OS) software that performs automatic SMP lock tracing analysis on an executing application program. System administrators, users or other entities initiate an automatic SMP lock tracing analysis. A particular thread of the executing application program requests and obtains a lock for a memory address pointer. A subsequent thread requests the same memory address pointer lock prior to the particular thread's release of that lock. The subsequent thread begins to spin, waiting for the release of that address pointer lock. When the subsequent thread reaches a predetermined maximum amount of wait time, MAXSPIN, a lock testing tool in the kernel of the OS detects the MAXSPIN condition. The OS performs a test to determine whether the subsequent thread and address pointer lock meet the criteria set during initiation of the automatic lock trace method. The OS initiates an SMP lock trace capture automatically if all criteria, or arguments, of the lock trace method are met. System administrators, software programmers, users or other entities interpret the results of the SMP lock tracing method, which the OS stores in a trace table, to determine performance improvements for the executing application program.
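The sketch below models the MAXSPIN behavior in user-space Python, under stated assumptions: TracingLock, MAXSPIN_LIMIT, and the trace_table list are hypothetical stand-ins for the kernel's lock testing tool and trace table, and the criteria callback stands in for the arguments supplied when automatic tracing is initiated.

import collections
import threading
import time

MAXSPIN_LIMIT = 16_384    # illustrative spin count standing in for the MAXSPIN wait limit
TraceEntry = collections.namedtuple("TraceEntry", "thread lock_id spins timestamp")

class TracingLock:
    def __init__(self, lock_id, criteria=lambda thread_name, lock_id: True):
        self._lock = threading.Lock()
        self.lock_id = lock_id
        self.criteria = criteria    # criteria supplied when automatic tracing is initiated
        self.trace_table = []       # stands in for the trace table the OS would fill

    def acquire(self):
        spins = 0
        while not self._lock.acquire(blocking=False):
            spins += 1
            if spins == MAXSPIN_LIMIT:
                # MAXSPIN reached: capture a trace record only if the criteria are met.
                me = threading.current_thread().name
                if self.criteria(me, self.lock_id):
                    self.trace_table.append(TraceEntry(me, self.lock_id, spins, time.time()))

    def release(self):
        self._lock.release()

if __name__ == "__main__":
    lock = TracingLock("addr_0x1000")
    holder = threading.Thread(
        target=lambda: (lock.acquire(), time.sleep(0.3), lock.release()), name="holder")
    spinner = threading.Thread(
        target=lambda: (lock.acquire(), lock.release()), name="spinner")
    holder.start(); time.sleep(0.05); spinner.start()
    holder.join(); spinner.join()
    print(lock.trace_table)   # the spinner exceeds MAXSPIN while the holder sleeps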
Abstract:
A system for managing processor cycles. A set of uncapped partitions that are ready to run is identified in response to unused processor cycles being present in a dispatch window. A number of candidate partitions are identified from the identified set of uncapped partitions based on a usage history in which each candidate partition used at least 100 percent of its entitlement in a predefined number of previous dispatch windows. A partition is then selected from the candidate partitions by a lottery process.
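A minimal Python sketch of the selection steps, assuming illustrative Partition fields and a simple ticket-based lottery; the three-window history length and the ticket weights are assumptions used only to make the example concrete.

import random
from dataclasses import dataclass, field

HISTORY_WINDOWS = 3   # predefined number of previous dispatch windows to inspect

@dataclass
class Partition:
    name: str
    uncapped: bool
    ready: bool
    usage_history: list = field(default_factory=list)  # % of entitlement per window
    tickets: int = 1                                    # lottery weight (assumed)

def select_partition(partitions):
    # Step 1: uncapped partitions that are ready to run.
    ready = [p for p in partitions if p.uncapped and p.ready]
    # Step 2: candidates that used at least 100% of entitlement in each of the
    # last HISTORY_WINDOWS dispatch windows.
    candidates = [p for p in ready
                  if len(p.usage_history) >= HISTORY_WINDOWS
                  and all(u >= 100 for u in p.usage_history[-HISTORY_WINDOWS:])]
    if not candidates:
        return None
    # Step 3: lottery - draw a winning ticket weighted by each candidate's tickets.
    draw = random.randrange(sum(p.tickets for p in candidates))
    for p in candidates:
        if draw < p.tickets:
            return p
        draw -= p.tickets

if __name__ == "__main__":
    parts = [Partition("lpar1", True, True, [120, 110, 105]),
             Partition("lpar2", True, True, [90, 100, 100]),
             Partition("lpar3", True, False, [150, 130, 140])]
    print(select_partition(parts).name)   # only lpar1 qualifies here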
Abstract:
Requests for power reduction are handled by first enabling any partition to request an amount of power change, e.g., a reduction. In response to the request for power reduction, an equal proportion of the whole amount of the power reduction is distributed among each of a set of cores providing the entitlements to the partitions, and the entitlement of the requesting partition is reduced by an amount corresponding to the whole amount of the power change.
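A short Python sketch of the distribution rule, under the assumption that core power budgets and partition entitlements can be represented as simple numeric maps; apply_power_reduction and its units are illustrative, not the claimed interfaces.

def apply_power_reduction(cores, entitlements, requester, amount):
    """cores: {core_id: power_budget}, entitlements: {partition: entitlement}."""
    per_core = amount / len(cores)      # equal proportion of the whole reduction per core
    for core_id in cores:
        cores[core_id] -= per_core
    # The requesting partition's entitlement drops by an amount corresponding to
    # the whole power change.
    entitlements[requester] -= amount
    return cores, entitlements

if __name__ == "__main__":
    cores = {"core0": 50.0, "core1": 50.0, "core2": 50.0, "core3": 50.0}
    ents = {"lparA": 2.0, "lparB": 1.5}
    print(apply_power_reduction(cores, ents, "lparA", 0.4))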
Abstract:
An approach is provided for identifying cache extension sizes that correspond to different partitions running on a computer system. The approach extends a first hardware cache, associated with a first processing core included in the processor's silicon substrate, with a first memory allocation from a system memory area that is external to the silicon substrate; the first memory allocation corresponds to one of a plurality of cache extension sizes, which in turn corresponds to one of the partitions running on the computer system. The approach further extends a second hardware cache, associated with a second processing core also included in the processor's silicon substrate, with a second memory allocation from the system memory area; the second memory allocation corresponds to another of the cache extension sizes, which corresponds to a different partition that is being executed by the second processing core.
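The bookkeeping described above can be sketched in Python as follows, assuming per-partition extension sizes chosen elsewhere (e.g., by a hypervisor) and a flat external memory region; EXTENSION_SIZES and CacheExtension are hypothetical names used only for illustration.

from dataclasses import dataclass

# Per-partition cache extension sizes in bytes (assumed values).
EXTENSION_SIZES = {"lpar1": 2 * 1024 * 1024, "lpar2": 8 * 1024 * 1024}

@dataclass
class CacheExtension:
    core_id: int
    partition: str
    base_addr: int     # offset into the external system memory area
    size: int

def allocate_extensions(assignments, region_base=0):
    """assignments: {core_id: partition}. Returns one cache extension per core."""
    extensions, cursor = [], region_base
    for core_id, partition in sorted(assignments.items()):
        size = EXTENSION_SIZES[partition]      # size corresponds to the partition on that core
        extensions.append(CacheExtension(core_id, partition, cursor, size))
        cursor += size                         # next allocation follows in system memory
    return extensions

if __name__ == "__main__":
    for ext in allocate_extensions({0: "lpar1", 1: "lpar2"}):
        print(ext)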
Abstract:
A method, programmed medium and system are provided in which system bus traffic is moderated with real-time data. The Operating System (OS) is enabled to obtain information from the firmware (FW) to determine whether a resource threshold has been reached. This is accomplished by generating an interrupt to flag the OS when the bus request retry rate has reached a predetermined number. The system firmware plays an integral role in this mechanism and should be interpreted as a general term that may also include a hypervisor. The system firmware reports the bus request retry rate to the operating system by way of, for example, a firmware-generated interrupt. The OS may run something similar to a kernel daemon/service to intercept the interrupt notice. In the simplest case, the daemon/service determines whether the threshold has been met based on the feedback from the firmware. If so, it generates a system call that moderates traffic with an operating system tunable. In one example, the number of simultaneous multithreading (SMT) threads per core is reduced using a system call. This effectively throttles back the number of logical threads per core and alleviates the bus request saturation.
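A Python sketch of the daemon-side logic, with hypothetical names: on_firmware_interrupt stands in for the handler that receives the firmware-reported retry rate, and set_smt_threads stands in for whatever system call or OS tunable actually changes the SMT mode; the threshold and SMT levels are assumed values.

RETRY_RATE_THRESHOLD = 10_000      # bus request retries per sampling interval (assumed)
SMT_LEVELS = [8, 4, 2, 1]          # allowed SMT threads per core, highest first (assumed)

current_smt = SMT_LEVELS[0]

def set_smt_threads(n):
    # Placeholder for the real system call or OS tunable that changes the SMT mode.
    print(f"setting SMT threads per core to {n}")

def on_firmware_interrupt(retry_rate):
    """Called when firmware flags the OS with the current bus request retry rate."""
    global current_smt
    if retry_rate >= RETRY_RATE_THRESHOLD and current_smt != SMT_LEVELS[-1]:
        # Step down one SMT level to throttle logical threads and relieve the bus.
        current_smt = SMT_LEVELS[SMT_LEVELS.index(current_smt) + 1]
        set_smt_threads(current_smt)

if __name__ == "__main__":
    for rate in (2_000, 12_000, 15_000):   # simulated firmware reports
        on_firmware_interrupt(rate)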
Abstract:
An approach is provided to identify a disabled processing core and an active processing core from a set of processing cores included in a processing node. Each of the processing cores is assigned a cache memory. The approach extends a memory map of the cache memory assigned to the active processing core to include the cache memory assigned to the disabled processing core. A first amount of data that is used by a first process is stored by the active processing core to the cache memory assigned to the active processing core. A second amount of data is stored by the active processing core to the cache memory assigned to the disabled processing core using the extended memory map.
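A minimal Python sketch of the extended-map idea, assuming the caches can be represented as (base, size) regions: the active core's map is widened to include the disabled core's cache, and stores that exceed the local capacity spill into the borrowed region. Addresses and sizes are invented for illustration.

class CacheMap:
    def __init__(self, local_base, local_size):
        self.regions = [(local_base, local_size)]   # (base, size) of usable cache regions

    def extend(self, borrowed_base, borrowed_size):
        # Add the disabled core's cache to this core's memory map.
        self.regions.append((borrowed_base, borrowed_size))

    def place(self, nbytes):
        """Return where a store of nbytes lands, filling regions in map order."""
        remaining = nbytes
        placement = []
        for index, (base, size) in enumerate(self.regions):
            take = min(remaining, size)
            if take:
                placement.append((index, base, take))
                remaining -= take
        if remaining:
            raise MemoryError("data exceeds combined cache capacity")
        return placement

if __name__ == "__main__":
    active = CacheMap(local_base=0x0000, local_size=256 * 1024)
    active.extend(borrowed_base=0x40000, borrowed_size=256 * 1024)  # disabled core's cache
    # First 256 KiB goes to the local cache, the next 128 KiB to the borrowed cache.
    print(active.place(384 * 1024))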