Abstract:
A method, system, and program for managing memory page requests in a multi-processor data processing system determines a threshold value of available memory and dynamically adjusts an allocation time to fulfill a page request if the available memory is below the threshold value. The allocation time to fulfill the page request is based upon the percentage of available memory pages once a page stealer commences a scan for pages. An allocation wait time is adjusted in inverse proportion to the percentage of available memory: its duration increases as the percentage of available memory decreases and decreases as the percentage of available memory increases. More specifically, an average time per page to allocate a page is determined, with the scan time of the page stealer's scan included in computing the average. A tunable value is then applied to the average time to determine the wait time. In a preferred embodiment, user-defined values are received that control the allocation wait time before a page request is fulfilled.
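As an illustration of the relationship described above, the following C sketch computes a wait time that grows as the free-memory percentage falls below a threshold. The function name, the exact inverse-proportional scaling, and the sample parameters (threshold, average time per page, tunable multiplier) are assumptions for illustration, not the claimed implementation.

    /* Illustrative sketch: names and scaling are hypothetical. */
    #include <stdio.h>

    /* Returns the time (in microseconds) a page request should wait before
     * being fulfilled.  Above the threshold no wait is imposed; below it
     * the wait grows as the percentage of available memory shrinks. */
    static double allocation_wait_us(double pct_free, double threshold_pct,
                                     double avg_us_per_page, double tunable)
    {
        if (pct_free >= threshold_pct)
            return 0.0;                  /* plenty of memory: no delay    */

        /* Inverse proportionality: halving the free percentage doubles
         * the wait relative to the threshold point. */
        double scale = threshold_pct / pct_free;
        return tunable * avg_us_per_page * scale;
    }

    int main(void)
    {
        double avg = 50.0;               /* sample avg time per page,     */
                                         /* scan time already folded in   */
        for (double pct = 10.0; pct >= 1.0; pct -= 3.0)
            printf("free=%4.1f%%  wait=%8.1f us\n",
                   pct, allocation_wait_us(pct, 8.0, avg, 2.0));
        return 0;
    }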
Abstract:
A method is provided for a data processing system configured to include multiple logical partitions, wherein resources of the system are selectively allocated among respective partitions. In the method, an entity such as a Partition Load Manager or a separate background process is used to manage resources based on locality levels. The method includes the step of evaluating the allocation of resources to each of the partitions at a particular time, in order to select a partition having at least one resource considered to be of low desirability due to its level of locality with respect to the selected partition. The method further comprises identifying each of the other partitions that has a resource matching the resource of low desirability, and determining the overall benefit to the system that would result from trading the resource of low desirability for the matching resource of each of the identified partitions. Resources are reallocated to trade the resource of low desirability for the matching resource of the identified partition that is determined to provide the greatest overall benefit for the system, provided that at least some overall system benefit will result from the reallocation.
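The selection-and-trade loop described above can be pictured with a small C sketch. The locality table, the resource types, and the benefit function (the sum of locality gained on both sides of a proposed trade) are hypothetical stand-ins; the patent abstract does not specify these data structures.

    /* Illustrative sketch: locality table and benefit rule are assumed. */
    #include <stdio.h>

    #define NPART 3
    #define NRES  6

    /* loc[p][r]: desirability of resource r when assigned to partition p. */
    static const int loc[NPART][NRES] = {
        { 9, 1, 5, 3, 7, 2 },
        { 2, 8, 4, 6, 1, 5 },
        { 4, 3, 9, 2, 6, 8 },
    };
    static const int res_type[NRES] = { 0, 0, 1, 1, 0, 1 };
    static int owner[NRES]          = { 1, 0, 2, 0, 2, 1 };

    /* Overall system benefit of trading resources a and b between owners. */
    static int trade_benefit(int a, int b)
    {
        int pa = owner[a], pb = owner[b];
        return (loc[pa][b] + loc[pb][a]) - (loc[pa][a] + loc[pb][b]);
    }

    int main(void)
    {
        int selected = 0;                /* partition being evaluated     */

        /* Find the selected partition's least-desirable resource. */
        int worst = -1;
        for (int r = 0; r < NRES; r++)
            if (owner[r] == selected &&
                (worst < 0 || loc[selected][r] < loc[selected][worst]))
                worst = r;

        /* Among matching resources in other partitions, find the trade
         * with the greatest overall benefit. */
        int best = -1, best_gain = 0;
        for (int r = 0; r < NRES; r++) {
            if (owner[r] == selected || res_type[r] != res_type[worst])
                continue;
            int gain = trade_benefit(worst, r);
            if (gain > best_gain) { best_gain = gain; best = r; }
        }

        /* Reallocate only if some overall system benefit results. */
        if (best >= 0) {
            int tmp = owner[worst];
            owner[worst] = owner[best];
            owner[best]  = tmp;
            printf("traded resource %d for %d, system gain %d\n",
                   worst, best, best_gain);
        } else {
            printf("no beneficial trade found\n");
        }
        return 0;
    }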
Abstract:
Methods, systems, and products are disclosed for user-defined preferred DNS routing that include mapping, for a user in a data communications application, a domain name of a network host to a network address for a preferred DNS server, wherein the preferred DNS server has a network address for the domain name; receiving from the user a request for access to a resource accessible through the network host; and routing to the preferred DNS server a DNS request for the network address of the network host, the DNS request including the domain name of the network host. In typical embodiments, mapping a domain name to a network address for a preferred DNS server is carried out by storing, through the data communications application, the domain name in association with the network address for the preferred DNS server in a data structure in computer memory.
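A minimal C sketch of the mapping and routing steps, assuming a simple in-memory table inside the data communications application; the structure name, fields, and lookup function are illustrative, and no real DNS traffic is generated.

    /* Illustrative sketch: table layout and function names are assumed. */
    #include <stdio.h>
    #include <string.h>

    struct dns_mapping {
        const char *domain_name;     /* network host the user will access */
        const char *preferred_dns;   /* address of the preferred server   */
    };

    /* Data structure in computer memory holding user-defined mappings. */
    static struct dns_mapping table[] = {
        { "mail.example.com", "192.0.2.53" },
        { "www.example.org",  "198.51.100.10" },
    };

    static const char *lookup_preferred(const char *domain)
    {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (strcmp(table[i].domain_name, domain) == 0)
                return table[i].preferred_dns;
        return NULL;                 /* fall back to the default resolver */
    }

    /* Route a DNS request (containing the domain name) to the preferred
     * server, if one has been mapped for this user. */
    static void resolve_via_preferred(const char *domain)
    {
        const char *server = lookup_preferred(domain);
        if (server)
            printf("DNS query for %s -> preferred server %s\n",
                   domain, server);
        else
            printf("DNS query for %s -> default resolver\n", domain);
    }

    int main(void)
    {
        resolve_via_preferred("mail.example.com");
        resolve_via_preferred("unmapped.example.net");
        return 0;
    }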
Abstract:
A computer implemented method, apparatus, and computer usable program product for utilizing instruction trace registers. In one embodiment, a value in a target processor register in a plurality of processor registers is updated in response to executing an instruction associated with program code. In response to updating the value in the target processor register, an address for the instruction is copied from an instruction address register into an instruction trace register associated with the target processor register. The instruction trace register holds the address of the instruction that updated the value stored in the target processor register.
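The register pairing can be pictured with a small C simulation; the register file, the instruction address register, and write_register() are hypothetical stand-ins for hardware behavior, not a processor model.

    /* Illustrative simulation: register names and sizes are assumed. */
    #include <stdio.h>
    #include <stdint.h>

    #define NREGS 4

    static uint64_t reg[NREGS];      /* plurality of processor registers  */
    static uint64_t trace[NREGS];    /* one trace register per register   */
    static uint64_t iar;             /* instruction address register      */

    /* "Instruction at address addr writes value into register target":
     * the update also records which instruction performed it. */
    static void write_register(uint64_t addr, int target, uint64_t value)
    {
        iar = addr;                  /* address of the executing instr.   */
        reg[target] = value;         /* update the target register        */
        trace[target] = iar;         /* remember who last updated it      */
    }

    int main(void)
    {
        write_register(0x1000, 2, 42);
        write_register(0x1004, 0, 7);
        write_register(0x1010, 2, 99);   /* overwrites r2, trace follows  */

        for (int i = 0; i < NREGS; i++)
            printf("r%d = %llu  (last written at 0x%llx)\n", i,
                   (unsigned long long)reg[i], (unsigned long long)trace[i]);
        return 0;
    }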
Abstract:
Identifying compatible threads in a Simultaneous Multithreading (SMT) processor environment is provided by calculating a performance metric, such as cycles per instruction (CPI), that is observed when two threads are running on the SMT processor. The CPI achieved while both threads were executing on the SMT processor is determined. If the achieved CPI is better than a compatibility threshold, then information indicating the compatibility is recorded. When a thread is about to complete, the scheduler looks at the run queue to which the completing thread belongs in order to dispatch another thread. The scheduler identifies a thread that is (1) compatible with the thread that is still running on the SMT processor (i.e., the thread that is not about to complete), and (2) ready to execute. The CPI data is continually updated so that threads that are compatible with one another are continually identified.
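A C sketch of the bookkeeping, assuming a symmetric compatibility matrix and a hypothetical CPI threshold; record_pair_cpi() and pick_compatible() stand in for the measurement and scheduler steps, and the sample numbers are invented.

    /* Illustrative sketch: threshold and data layout are assumptions. */
    #include <stdio.h>
    #include <stdbool.h>

    #define NTHREADS 4
    #define CPI_THRESHOLD 1.5        /* lower CPI is better               */

    static bool compatible[NTHREADS][NTHREADS];

    /* Called after an interval in which threads a and b shared the SMT
     * processor; cpi is the CPI achieved while both were executing. */
    static void record_pair_cpi(int a, int b, double cpi)
    {
        bool ok = cpi < CPI_THRESHOLD;   /* better than the threshold?    */
        compatible[a][b] = compatible[b][a] = ok;
    }

    /* Scan the completing thread's run queue for a ready thread that is
     * compatible with the thread still running; returns -1 if none. */
    static int pick_compatible(int running, const int *run_queue, int n,
                               const bool *ready)
    {
        for (int i = 0; i < n; i++)
            if (ready[run_queue[i]] && compatible[running][run_queue[i]])
                return run_queue[i];
        return -1;
    }

    int main(void)
    {
        record_pair_cpi(0, 1, 1.2);      /* 0 and 1 run well together     */
        record_pair_cpi(0, 2, 2.3);      /* 0 and 2 do not                */
        record_pair_cpi(0, 3, 1.4);

        int  run_queue[] = { 2, 3, 1 };
        bool ready[NTHREADS] = { false, true, true, false };

        /* Thread 0 keeps running; its partner is about to complete. */
        int next = pick_compatible(0, run_queue, 3, ready);
        printf("dispatch thread %d alongside thread 0\n", next);
        return 0;
    }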
Abstract:
A system and method are provided for altering the priority of a process, or thread of execution, when the process acquires a software lock. The priority is altered when the lock is acquired and restored when the process releases the lock. Thread priorities can be altered for every lock being managed by the operating system or can be altered selectively. In addition, the amount of alteration can be individually adjusted so that a process that acquires one lock receives a different priority boost than a process that acquires a different lock. Furthermore, a method of tuning a computer system by adjusting lock priority values is provided.
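A C sketch under assumed structures: each managed lock carries its own tunable boost value, and acquire/release adjust a per-thread priority field. A real implementation would live in kernel lock and scheduler code; this shows only the shape of the idea.

    /* Illustrative sketch: single-threaded, no real atomicity. */
    #include <stdio.h>

    struct thread {
        int priority;                /* current scheduling priority       */
        int saved_priority;          /* priority to restore on release    */
    };

    struct managed_lock {
        int held;                    /* 0 = free, 1 = held                */
        int boost;                   /* per-lock, individually tunable    */
    };

    static void lock_acquire(struct managed_lock *lk, struct thread *t)
    {
        lk->held = 1;
        t->saved_priority = t->priority;
        t->priority += lk->boost;    /* boost while the lock is held      */
    }

    static void lock_release(struct managed_lock *lk, struct thread *t)
    {
        t->priority = t->saved_priority;   /* restore original priority   */
        lk->held = 0;
    }

    int main(void)
    {
        struct thread t = { .priority = 60 };
        struct managed_lock hot_lock  = { .held = 0, .boost = 10 };
        struct managed_lock cold_lock = { .held = 0, .boost = 2  };

        lock_acquire(&hot_lock, &t);
        printf("holding hot lock, priority %d\n", t.priority);
        lock_release(&hot_lock, &t);

        lock_acquire(&cold_lock, &t);
        printf("holding cold lock, priority %d\n", t.priority);
        lock_release(&cold_lock, &t);

        printf("after release, priority %d\n", t.priority);
        return 0;
    }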
Abstract:
A system and method for scheduling threads in a Simultaneous Multithreading (SMT) processor environment utilizing multiple SMT processors is provided. Poorly performing threads that are being run on each of the SMT processors are identified. After being identified, the poorly performing threads are moved to a different SMT processor. Data is captured regarding the performance of threads. In one embodiment, this data includes each thread's CPI value. When a thread is moved, data regarding the thread and its performance at the time it was moved is recorded along with a timestamp. The data regarding previous moves is used to determine whether a thread's performance has improved following the move.
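A C sketch of the identify-and-move step, with hypothetical fields and an invented CPI cutoff; the move log records each thread's CPI and a timestamp at migration time so a later pass could judge whether the move helped.

    /* Illustrative sketch: cutoff, fields, and policy are assumptions. */
    #include <stdio.h>
    #include <time.h>

    #define NPROC    2
    #define NTHREADS 4
    #define CPI_POOR 2.0             /* hypothetical "poor performer" cut */

    struct thread {
        int    id;
        int    cpu;                  /* SMT processor the thread runs on  */
        double cpi;                  /* measured cycles per instruction   */
    };

    struct move_record {
        int    id;
        double cpi_at_move;
        time_t when;
    };

    static struct thread threads[NTHREADS] = {
        { 0, 0, 1.1 }, { 1, 0, 2.7 }, { 2, 1, 0.9 }, { 3, 1, 3.2 },
    };
    static struct move_record moves[NTHREADS];
    static int nmoves;

    static void rebalance(void)
    {
        for (int i = 0; i < NTHREADS; i++) {
            if (threads[i].cpi <= CPI_POOR)
                continue;                    /* performing well: leave it */
            /* Record the move, then shift to the other processor. */
            moves[nmoves++] = (struct move_record){
                threads[i].id, threads[i].cpi, time(NULL)
            };
            threads[i].cpu = (threads[i].cpu + 1) % NPROC;
        }
    }

    /* Later, current CPI would be compared against the recorded value. */
    static void report(void)
    {
        for (int i = 0; i < nmoves; i++)
            printf("thread %d moved (CPI was %.2f at %ld)\n",
                   moves[i].id, moves[i].cpi_at_move, (long)moves[i].when);
    }

    int main(void)
    {
        rebalance();
        report();
        return 0;
    }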
Abstract:
An approach is provided that reserves a software lock for a waiting thread. When a software lock is released by a first thread, a second thread that is waiting for the same resource controlled by the software lock is woken up. In addition, a reservation to the software lock is established for the second thread. After the reservation is established, if the lock is available and is requested by a thread other than the second thread, the requesting thread is denied, added to the wait queue, and put to sleep. In addition, the reservation is cleared. After the reservation has been cleared, the lock will be granted to the next thread that requests the lock.
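A single-threaded C simulation of the grant/deny decisions described above; the lock structure, wait queue, and "sleep" are modeled with plain data, and the thread identifiers are invented.

    /* Illustrative simulation: no real threads or sleeping involved. */
    #include <stdio.h>

    #define QLEN 8

    struct res_lock {
        int holder;                  /* -1 when free                      */
        int reserved_for;            /* -1 when no reservation is set     */
        int wait_queue[QLEN];
        int nwaiting;
    };

    static struct res_lock lk = { .holder = -1, .reserved_for = -1 };

    /* Returns 1 if tid now holds the lock, 0 if it was queued to sleep. */
    static int lock_request(int tid)
    {
        if (lk.holder == -1 &&
            (lk.reserved_for == -1 || lk.reserved_for == tid)) {
            lk.holder = tid;
            lk.reserved_for = -1;
            printf("thread %d granted the lock\n", tid);
            return 1;
        }
        if (lk.holder == -1 && lk.reserved_for != tid)
            lk.reserved_for = -1;    /* lock free but reserved for another:
                                      * deny the requester and clear the
                                      * reservation                       */
        lk.wait_queue[lk.nwaiting++] = tid;
        printf("thread %d denied, queued and put to sleep\n", tid);
        return 0;
    }

    static void lock_release(int tid)
    {
        lk.holder = -1;
        if (lk.nwaiting > 0) {
            /* Wake the first waiter and reserve the lock for it. */
            int next = lk.wait_queue[0];
            for (int i = 1; i < lk.nwaiting; i++)
                lk.wait_queue[i - 1] = lk.wait_queue[i];
            lk.nwaiting--;
            lk.reserved_for = next;
            printf("thread %d released; lock reserved for thread %d\n",
                   tid, next);
        }
    }

    int main(void)
    {
        lock_request(1);             /* thread 1 takes the free lock      */
        lock_request(2);             /* thread 2 must wait                */
        lock_release(1);             /* wakes thread 2, reserves for it   */
        lock_request(3);             /* denied: lock is held for thread 2 */
        lock_request(2);             /* the woken thread claims the lock  */
        return 0;
    }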
Abstract:
A system and method are provided for delaying a priority boost of an execution thread. When a thread prepares to enter a critical section of code, such as when the thread utilizes a shared system resource, a user-mode accessible data area is updated to indicate that the thread is in a critical section and to record the priority boost the thread should receive if the kernel receives a preemption event. If the kernel receives a preemption event before the thread finishes the critical section, the kernel applies the priority boost on behalf of the thread. Often, the thread will finish the critical section without its priority actually being boosted. If the thread does receive an actual priority boost, then after the critical section is finished the kernel resets the thread's priority to its normal level.
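A C sketch of the deferred-boost handshake, with a hypothetical shared data area and a simulated kernel path; the preemption event, the boost, and the reset are ordinary function calls here, not a real scheduler.

    /* Illustrative sketch: field names and boost values are assumptions. */
    #include <stdio.h>
    #include <stdbool.h>

    struct shared_area {             /* user-mode accessible data area    */
        bool in_critical_section;
        int  requested_boost;        /* boost to apply if preempted       */
        bool boost_applied;          /* set by the simulated kernel       */
    };

    static struct shared_area area;
    static int thread_priority = 60;

    /* Simulated kernel path taken when a preemption event arrives. */
    static void kernel_preemption_event(void)
    {
        if (area.in_critical_section && !area.boost_applied) {
            thread_priority += area.requested_boost;
            area.boost_applied = true;
            printf("kernel: boosted priority to %d\n", thread_priority);
        }
    }

    /* Thread side: cheap user-mode writes, no system call needed in the
     * common case where no preemption event occurs. */
    static void enter_critical_section(int boost)
    {
        area.requested_boost = boost;
        area.boost_applied = false;
        area.in_critical_section = true;
    }

    static void exit_critical_section(void)
    {
        area.in_critical_section = false;
        if (area.boost_applied) {        /* boost was actually applied    */
            thread_priority -= area.requested_boost;
            area.boost_applied = false;
            printf("kernel: priority reset to %d\n", thread_priority);
        }
    }

    int main(void)
    {
        enter_critical_section(10);
        exit_critical_section();         /* common case: never boosted    */
        printf("after quiet section, priority %d\n", thread_priority);

        enter_critical_section(10);
        kernel_preemption_event();       /* preempted mid-section         */
        exit_critical_section();
        printf("after preempted section, priority %d\n", thread_priority);
        return 0;
    }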