Abstract:
Scheduling threads in a multi-processor computer system includes establishing an interrupt threshold for a thread, where the interrupt threshold represents the maximum permissible number of interrupts during thread execution on a processor; executing the thread on a current processor, where the thread has affinity for one or more processors including the current processor; counting the number of interrupts during execution of the thread on the current processor; and removing the thread's affinity for the current processor in dependence upon the counted number of interrupts and the interrupt threshold.
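The mechanism can be pictured as a per-thread interrupt counter compared against a threshold, with the current processor cleared from the thread's affinity mask when the threshold is exceeded. The sketch below is illustrative only; the structure and function names (thread_t, affinity_mask, note_interrupt, maybe_remove_affinity) are assumptions, not the patent's implementation.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-thread bookkeeping for the interrupt-threshold scheme. */
typedef struct thread {
    uint64_t affinity_mask;       /* one bit per processor the thread may run on */
    int      current_cpu;         /* processor the thread is executing on        */
    unsigned interrupt_count;     /* interrupts observed during this execution   */
    unsigned interrupt_threshold; /* maximum permissible number of interrupts    */
} thread_t;

/* Called (for illustration) from the interrupt path while the thread runs. */
static void note_interrupt(thread_t *t)
{
    t->interrupt_count++;
}

/* Remove affinity for the current processor once the threshold is exceeded,
 * so the scheduler will migrate the thread elsewhere on its next dispatch. */
static bool maybe_remove_affinity(thread_t *t)
{
    if (t->interrupt_count > t->interrupt_threshold) {
        t->affinity_mask &= ~(1ULL << t->current_cpu);
        t->interrupt_count = 0;   /* restart the count on the next processor */
        return true;
    }
    return false;
}
```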
Abstract:
A system and method are provided for identifying compatible threads in a Simultaneous Multithreading (SMT) processor environment by calculating a performance metric, such as cycles per instruction (CPI), while two threads are running on the SMT processor. The CPI achieved while both threads were executing on the SMT processor is determined. If the achieved CPI is better than a compatibility threshold, information indicating the compatibility is recorded. When a thread is about to complete, the scheduler looks at the run queue to which the completing thread belongs to dispatch another thread. The scheduler identifies a thread that is (1) compatible with the thread that is still running on the SMT processor (i.e., the thread that is not about to complete) and (2) ready to execute. The CPI data is continually updated so that threads that are compatible with one another are continually identified.
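As a rough illustration of the compatibility bookkeeping, the sketch below records the observed joint CPI for a pair of threads, marks them compatible when the CPI beats a threshold, and prefers a compatible ready thread at dispatch time. The table layout, threshold value, and names (joint_cpi, COMPAT_THRESHOLD_CPI, pick_compatible) are assumptions for illustration.

```c
#include <stdbool.h>

#define MAX_THREADS 64
#define COMPAT_THRESHOLD_CPI 1.5   /* hypothetical threshold; lower CPI is better */

/* Observed cycles-per-instruction when threads a and b shared the SMT core. */
static double joint_cpi[MAX_THREADS][MAX_THREADS];
static bool   compatible[MAX_THREADS][MAX_THREADS];

/* Record a new CPI sample for the (a, b) pairing and update compatibility. */
static void record_pair_cpi(int a, int b, double cpi)
{
    joint_cpi[a][b] = joint_cpi[b][a] = cpi;
    compatible[a][b] = compatible[b][a] = (cpi < COMPAT_THRESHOLD_CPI);
}

/* When one thread completes, pick from its run queue a ready thread that is
 * known to be compatible with the thread still running on the SMT core. */
static int pick_compatible(int still_running, const int *ready, int nready)
{
    for (int i = 0; i < nready; i++) {
        if (compatible[still_running][ready[i]])
            return ready[i];
    }
    return nready > 0 ? ready[0] : -1;   /* fall back to any ready thread */
}
```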
Abstract:
Methods, systems, and media are disclosed for improved granularity of a request-response communication on a networked computer system. One example embodiment includes receiving the request-response communication by the networked computer system, and associating the request-response communication with a port, having a nodelay setting, from a set of ports on the networked computer system. Further, the example embodiment includes enabling, based upon the associating, the nodelay setting upon connection of the request-response communication with the port. Further still, the example embodiment includes sending, in accordance with the enabling, the request-response communication to a destination in communication with the networked computer system. In addition, further example embodiments include configuring the ports on the networked computer system with nodelay values indicating whether a particular port is assigned nodelay or no nodelay for a request portion or a response portion of a request-response communication connecting to that particular port.
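The per-port nodelay idea maps naturally onto the standard TCP_NODELAY socket option, which disables Nagle coalescing on a connection. The sketch below applies a configured nodelay value when a connection is accepted on a given port; the port table, its contents, and the function names are illustrative assumptions rather than the patent's implementation.

```c
#include <stddef.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Hypothetical per-port configuration: nonzero means the port is assigned nodelay. */
struct port_config {
    unsigned short port;
    int            nodelay;
};

static const struct port_config port_table[] = {
    { 8080, 1 },   /* request-response traffic: disable Nagle coalescing */
    { 9090, 0 },   /* bulk traffic: leave Nagle enabled                  */
};

/* Apply the configured nodelay setting to an accepted connection on `port`. */
static int apply_port_nodelay(int connfd, unsigned short port)
{
    for (size_t i = 0; i < sizeof(port_table) / sizeof(port_table[0]); i++) {
        if (port_table[i].port == port) {
            int flag = port_table[i].nodelay;
            return setsockopt(connfd, IPPROTO_TCP, TCP_NODELAY,
                              &flag, sizeof(flag));
        }
    }
    return 0;   /* port not configured: keep the default behaviour */
}
```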
Abstract:
A system and method are provided for dynamically altering Virtual Memory Manager (VMM) Sequential-Access Read Ahead settings based upon current system memory conditions. Normal VMM operations are performed using the Sequential-Access Read Ahead values set by the user. When low memory is detected, the system either decreases the maximum page ahead (maxpgahead) value, if the amount of free space is simply low, or turns off Sequential-Access Read Ahead operations, if free space has reached a critically low level. The altered VMM Sequential-Access Read Ahead state remains in effect until enough free space is available for normal VMM Sequential-Access Read Ahead operations to be performed, at which point the altered Sequential-Access Read Ahead values are reset to their original levels.
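A simplified picture of that policy: compare the number of free page frames against two thresholds, reduce or disable read-ahead accordingly, and restore the user's value once memory recovers. The threshold values and variable names below are placeholders; only maxpgahead is taken from the abstract's own vocabulary.

```c
/* Hypothetical free-space thresholds, in page frames. */
#define FREE_LOW       4096
#define FREE_CRITICAL  1024

static int user_maxpgahead = 8;   /* value configured by the user          */
static int cur_maxpgahead  = 8;   /* value the VMM actually uses right now */

/* Re-evaluate the Sequential-Access Read Ahead setting for the current
 * amount of free memory, following the policy described in the abstract. */
static void adjust_read_ahead(long free_frames)
{
    if (free_frames <= FREE_CRITICAL) {
        cur_maxpgahead = 0;                      /* critically low: turn read-ahead off */
    } else if (free_frames <= FREE_LOW) {
        cur_maxpgahead = user_maxpgahead / 2;    /* low: decrease the page-ahead maximum */
    } else {
        cur_maxpgahead = user_maxpgahead;        /* enough free space: restore original  */
    }
}
```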
Abstract:
The present invention provides an improved method, system, and computer program product that can optimize cache utilization. In one embodiment, a kernel service creates a storage map and sends said storage map to an application. In one embodiment of the present invention, the step of the kernel service creating the storage map may further comprise the kernel service creating a cache map. In one embodiment of the present invention, the step of the kernel service creating the storage map may further comprise the kernel service creating an indication of one or more storage locations that have been allocated to store information for the application. In one embodiment of the present invention, the step of the kernel service creating the storage map may further comprise the kernel service creating the storage map in response to receiving a request for the storage map from the application.
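To make the storage-map idea concrete, the sketch below shows a hypothetical kernel service that, on request from an application, fills in a map of the storage locations allocated to it and how they map onto the cache. Every type, field, and function name here (storage_extent, kernel_get_storage_map, cache_set) is an assumption for illustration, not an actual kernel interface.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical description of one storage location allocated to the application. */
struct storage_extent {
    uintptr_t base;        /* start of the allocated region        */
    size_t    length;      /* size of the region in bytes          */
    int       cache_set;   /* cache set/color the region maps onto */
};

struct storage_map {
    size_t                nextents;
    struct storage_extent extents[16];
};

/* Sketch of the kernel service: build the map in response to a request
 * from the application identified by `pid`. */
int kernel_get_storage_map(int pid, struct storage_map *map)
{
    (void)pid;
    /* A real kernel would walk the allocations owned by `pid` and record
     * where each one lives and how it maps onto the cache. */
    map->nextents = 1;
    map->extents[0].base      = 0x100000;
    map->extents[0].length    = 4096;
    map->extents[0].cache_set = 3;
    return 0;
}
```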
Abstract:
A computer program product for scheduling threads in a multiprocessor computer comprises computer program instructions configured to select a thread in a ready queue to be dispatched to a processor and determine whether an interrupt mask flag is set in a thread control block associated with the thread. If the interrupt mask flag is set in the thread control block associated with the thread, the computer program instructions are configured to select a processor, set a current processor priority register of the selected processor to least favored, and dispatch the thread from the ready queue to the selected processor.
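The dispatch-time check can be sketched as below: a flag in a per-thread control block decides whether the selected processor is shielded from interrupts before the thread is dispatched. The structures, stub primitives, and names are hypothetical stand-ins so the sketch compiles on its own; they are not the claimed implementation.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical thread control block carrying the interrupt mask flag. */
struct tcb {
    int  tid;
    bool interrupt_mask;   /* set: shield the selected processor from interrupts */
};

/* Stand-in scheduler primitives so the sketch is self-contained. */
static int  select_processor(void)           { return 0; }
static void set_cppr_least_favored(int cpu)  { printf("cpu %d: CPPR <- least favored\n", cpu); }
static void dispatch(struct tcb *t, int cpu) { printf("thread %d -> cpu %d\n", t->tid, cpu); }

/* Dispatch one thread taken from the ready queue, masking interrupts on the
 * chosen processor first when the thread's control block asks for it. */
static void dispatch_thread(struct tcb *t)
{
    int cpu = select_processor();
    if (t->interrupt_mask)
        set_cppr_least_favored(cpu);
    dispatch(t, cpu);
}

int main(void)
{
    struct tcb t = { .tid = 42, .interrupt_mask = true };
    dispatch_thread(&t);
    return 0;
}
```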
Abstract:
A method of allocating resources in a data processing system is disclosed. The method includes an application designing a page reallocation scheme and sending said page reallocation scheme from said application to a kernel service that is responsible for allocation of storage locations.
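Read concretely, the application builds a description of how its pages should be rearranged and hands it to the kernel service that owns storage allocation. The structures and the kernel_reallocate_pages() call below are purely illustrative assumptions for that flow.

```c
#include <stddef.h>
#include <stdint.h>

/* One entry of a hypothetical page reallocation scheme: relocate the page
 * backing `old_vaddr` according to `new_hint`. */
struct page_move {
    uintptr_t old_vaddr;
    uintptr_t new_hint;    /* e.g. a preferred frame or memory pool */
};

struct realloc_scheme {
    size_t           nmoves;
    struct page_move moves[8];
};

/* Stand-in for the kernel service responsible for allocating storage. */
static int kernel_reallocate_pages(const struct realloc_scheme *s)
{
    (void)s;               /* a real service would validate and apply the moves */
    return 0;
}

int main(void)
{
    /* The application designs the scheme... */
    struct realloc_scheme scheme = {
        .nmoves = 1,
        .moves  = { { .old_vaddr = 0x10000000, .new_hint = 0x200000 } },
    };
    /* ...and sends it to the kernel service. */
    return kernel_reallocate_pages(&scheme);
}
```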
Abstract:
A system and method are provided for delaying a priority boost of an execution thread. When a thread prepares to enter a critical section of code, such as when the thread utilizes a shared system resource, a user-mode accessible data area is updated to indicate that the thread is in a critical section and to record the priority boost the thread should receive if the kernel sees a preemption event. If the kernel receives a preemption event before the thread finishes the critical section, the kernel applies the priority boost on behalf of the thread. Often, the thread finishes the critical section without its priority actually being boosted. If the thread does receive an actual priority boost, then after the critical section is finished the kernel resets the thread's priority to its normal level.
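The cooperation between user mode and the kernel can be pictured with a small shared data area: the thread marks its entry into the critical section along with the boost it would need, and the kernel applies that boost only if a preemption event actually arrives. Everything below, including the critical_state layout and function names, is an assumption for illustration.

```c
#include <stdbool.h>

/* Hypothetical user-mode accessible data area, one per thread. */
struct critical_state {
    volatile bool in_critical;      /* thread is inside a critical section  */
    volatile int  deferred_boost;   /* priority boost to apply if preempted */
    volatile bool boost_applied;    /* set by the kernel if it boosted us   */
};

/* User mode: entering the critical section costs only two stores. */
static void enter_critical(struct critical_state *cs, int boost)
{
    cs->deferred_boost = boost;
    cs->in_critical    = true;
}

/* Kernel side (sketch): on a preemption event, honour the deferred boost. */
static void on_preemption_event(struct critical_state *cs, int *priority)
{
    if (cs->in_critical) {
        *priority += cs->deferred_boost;
        cs->boost_applied = true;
    }
}

/* User mode: leave the critical section; only if a boost was actually
 * applied does the priority need to be reset to its normal level. */
static void exit_critical(struct critical_state *cs, int *priority, int normal)
{
    cs->in_critical = false;
    if (cs->boost_applied) {
        *priority = normal;        /* in reality a request into the kernel */
        cs->boost_applied = false;
    }
}
```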
Abstract:
Methods, systems, and computer program products are provided for scheduling threads in a multiprocessor computer. Embodiments include selecting a thread in a ready queue to be dispatched to a processor and determining whether an interrupt mask flag is set in a thread control block associated with the thread. If the interrupt mask flag is set in the thread control block associated with the thread, embodiments typically include selecting a processor, setting a current processor priority register of the selected processor to least favored, and dispatching the thread from the ready queue to the selected processor. In some embodiments, setting the current processor priority register of the selected processor to least favored is carried out by storing a value associated with the highest interrupt priority in the current processor priority register.
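Complementing the dispatch sketch above, the last point can be shown as a single register write: the selected processor is made "least favored" for interrupts by storing the value associated with the highest interrupt priority into its current processor priority register. The register model below is a hypothetical sketch, not actual hardware definitions; it simply assumes an interrupt is delivered to a processor only if the interrupt's priority exceeds the value held in that processor's CPPR.

```c
#include <stdint.h>

#define NCPUS 4

/* Hypothetical model: an external interrupt is delivered to a processor only
 * if its priority is higher than the value in that processor's CPPR, so a
 * CPPR holding the highest interrupt priority makes the processor the least
 * favored interrupt target. */
#define HIGHEST_INTERRUPT_PRIORITY 0xFF

static uint8_t cppr[NCPUS];   /* per-processor current processor priority register */

/* Make `cpu` the least favored interrupt target before dispatching a thread
 * whose control block has the interrupt mask flag set. */
static void set_cppr_least_favored(int cpu)
{
    cppr[cpu] = HIGHEST_INTERRUPT_PRIORITY;
}

/* Restore normal interrupt delivery after the thread leaves the processor. */
static void set_cppr_most_favored(int cpu)
{
    cppr[cpu] = 0;
}
```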
Abstract:
Disclosed are a computer-implemented method, computer program product, and apparatus for maintaining a preselect list. The method comprises software components detecting a page fault on a memory page. In response to detecting the page fault, the software components determine whether the memory page is referenced in the preselect list and, if so, unhide the memory page. Upon determining that the memory page is referenced in the preselect list, the software components remove the entry of the preselect list corresponding to the memory page to form at least one removed candidate page and skip paging-out of the at least one removed candidate page.
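A minimal sketch of that fault-path bookkeeping, assuming the preselect list holds hidden page-out candidates: if the faulting page is on the list, unhide it and drop its entry so it is no longer paged out. The list representation and helper names are assumptions for illustration.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PRESELECT_MAX 128

/* Hypothetical preselect list: candidate pages currently hidden so that a
 * reference to one of them raises a page fault. */
static uintptr_t preselect[PRESELECT_MAX];
static size_t    npreselect;

static void unhide_page(uintptr_t page)
{
    (void)page;   /* a real VMM would restore the page's access permissions */
}

/* Page-fault hook: if the faulting page is referenced in the preselect list,
 * unhide it and remove its entry so it is no longer a page-out candidate. */
static bool preselect_fault(uintptr_t page)
{
    for (size_t i = 0; i < npreselect; i++) {
        if (preselect[i] == page) {
            unhide_page(page);
            preselect[i] = preselect[--npreselect];   /* remove the entry     */
            return true;                              /* skip paging this out */
        }
    }
    return false;
}
```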