Abstract:
In a computing system, an application thread is executed on a hardware thread. Based on a configuration of the computing system, a first threshold is determined comprising a threshold percentage of execution time spent servicing a set of interrupts to the application thread relative to a total execution time for the hardware thread. For the hardware thread, a length of a first time period spent servicing an interrupt in the set of interrupts and a length of a second time period spent executing the application thread are measured. A cumulative percentage of execution time spent in the first time period relative to execution time spent in the first time period and the second time period is calculated. Responsive to the cumulative percentage being above the threshold percentage, interrupt servicing on the hardware thread is disabled.
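A minimal sketch of the threshold check described above, using hypothetical names and treating the two measured periods as plain durations:

```python
def should_disable_interrupts(interrupt_time_s, app_time_s, threshold_pct):
    """Return True when the share of time spent servicing interrupts on a
    hardware thread exceeds the configured threshold percentage.

    interrupt_time_s: measured time spent servicing interrupts (first time period)
    app_time_s:       measured time spent executing the application thread (second time period)
    threshold_pct:    threshold percentage derived from the system configuration
    """
    total = interrupt_time_s + app_time_s
    if total == 0:
        return False
    cumulative_pct = 100.0 * interrupt_time_s / total
    return cumulative_pct > threshold_pct

# Example: 120 ms of interrupt servicing vs. 800 ms of application execution
# yields ~13%, so with a 10% threshold interrupt servicing would be disabled.
print(should_disable_interrupts(0.120, 0.800, 10.0))
```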
Abstract:
Examples of techniques for hardware thread switching for scheduling policy in a processor are described herein. An aspect includes, based on receiving a request from a first software thread to dispatch to a first hardware thread, determining that the first hardware thread is occupied by a second software thread that has a higher priority than the first software thread. Another aspect includes issuing an interrupt to switch the second software thread from the first hardware thread to a second hardware thread. Another aspect includes, based on switching of the second software thread from the first hardware thread to the second hardware thread, dispatching the first software thread to the first hardware thread.
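A minimal sketch of the priority-driven switch, with hypothetical thread structures and the interrupt modeled as a direct move between hardware threads:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SoftwareThread:
    name: str
    priority: int          # higher value = higher priority

class HardwareThread:
    def __init__(self, ht_id: int):
        self.ht_id = ht_id
        self.occupant: Optional[SoftwareThread] = None

def dispatch(requester: SoftwareThread, target: HardwareThread, spare: HardwareThread):
    """Dispatch `requester` to `target`; if `target` is occupied by a
    higher-priority thread, switch that thread to `spare` first."""
    occupant = target.occupant
    if occupant is not None and occupant.priority > requester.priority:
        # The interrupt that triggers the switch is modeled here as a direct move.
        spare.occupant = occupant
    target.occupant = requester

ht0, ht1 = HardwareThread(0), HardwareThread(1)
ht0.occupant = SoftwareThread("high-prio", priority=10)
dispatch(SoftwareThread("low-prio", priority=1), ht0, ht1)
print(ht0.occupant.name, ht1.occupant.name)   # low-prio high-prio
```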
Abstract:
A process for processor management includes activating a delay thread running on a processor. A determination is made whether a wait event for a first thread running on the processor is in a queue. Responsive to determining that the wait event for the first thread is in the queue, a determination is made whether a wait time associated with the wait event has expired. Responsive to determining that the wait time has not expired, a determination is made whether the wait time exceeds a threshold. Responsive to determining that the wait time exceeds the threshold, a timer is set and a low power mode is initiated for the processor.
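A minimal sketch of one pass of the delay thread, with hypothetical queue entries and callbacks standing in for the timer and low power mechanisms:

```python
import time

def delay_thread_step(wait_queue, threshold_s, set_timer, enter_low_power):
    """One pass of the delay thread: inspect the queue for a wait event and,
    if its remaining wait time exceeds the threshold, arm a timer and put
    the processor into a low power mode."""
    if not wait_queue:
        return
    event = wait_queue[0]                          # wait event for the first thread
    remaining = event["wakeup_at"] - time.monotonic()
    if remaining <= 0:
        wait_queue.pop(0)                          # wait time expired; drop the event
        return
    if remaining > threshold_s:                    # long enough to justify sleeping
        set_timer(remaining)
        enter_low_power()

queue = [{"wakeup_at": time.monotonic() + 0.5}]
delay_thread_step(queue, threshold_s=0.1,
                  set_timer=lambda s: print(f"timer armed for {s:.2f}s"),
                  enter_low_power=lambda: print("entering low power mode"))
```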
Abstract:
A method for filtering multiple in-memory trace buffers for event ranges is provided. The method includes allocating a plurality of main trace buffers based on the number of central processing units (CPUs) participating in a trace. Each CPU has a dedicated main trace buffer, and each main trace buffer is circular. Each main trace buffer is divided into an equal number of sub-buffers. A plurality of events is written to the current sub-buffer. When the current sub-buffer is filled, events are written to the next sub-buffer. Events are extracted from at least one of the sub-buffers, starting with the sub-buffer that includes a compare time and ending at the end of the main trace buffer.
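A minimal sketch, with hypothetical class and parameter names, of a per-CPU circular buffer split into equal sub-buffers and an extraction that starts at the sub-buffer containing the compare time:

```python
class CpuTraceBuffer:
    """Circular per-CPU main trace buffer split into equal-sized sub-buffers."""
    def __init__(self, num_sub_buffers=4, sub_buffer_size=8):
        self.sub_buffers = [[] for _ in range(num_sub_buffers)]
        self.sub_buffer_size = sub_buffer_size
        self.current = 0

    def write(self, timestamp, event):
        if len(self.sub_buffers[self.current]) >= self.sub_buffer_size:
            # Current sub-buffer is full: advance circularly and overwrite the next one.
            self.current = (self.current + 1) % len(self.sub_buffers)
            self.sub_buffers[self.current] = []
        self.sub_buffers[self.current].append((timestamp, event))

    def extract(self, compare_time):
        """Extract events starting at the sub-buffer whose timestamp range
        includes compare_time and continuing to the logical end of the buffer."""
        start = next((i for i, sb in enumerate(self.sub_buffers)
                      if sb and sb[0][0] <= compare_time <= sb[-1][0]), None)
        if start is None:
            return []
        out, i = [], start
        while True:
            out.extend(self.sub_buffers[i])
            if i == self.current:
                break
            i = (i + 1) % len(self.sub_buffers)
        return out

buf = CpuTraceBuffer(num_sub_buffers=2, sub_buffer_size=2)
for ts in range(5):
    buf.write(ts, f"event-{ts}")
print(buf.extract(compare_time=3))   # events from timestamp 2 onward
```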
Abstract:
Various systems, processes, and products may be used to manage a processor. In particular implementations, managing a processor may include the ability to determine whether a thread is pausing for a short period of time and place a wait event for the thread in a queue based on a short thread pause occurring. Managing a processor may also include the ability to activate a delay thread that determines whether a wait time associated with the pause has expired and remove the wait event from the queue based on the wait time having expired.
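A minimal sketch of the queueing side, assuming a hypothetical cutoff for what counts as a short pause:

```python
import time

SHORT_PAUSE_THRESHOLD_S = 0.002       # hypothetical cutoff for a "short" pause

def on_thread_pause(wait_queue, thread_id, pause_duration_s):
    """If the thread is pausing only briefly, queue a wait event for it;
    a delay thread later removes events whose wait time has expired."""
    if pause_duration_s <= SHORT_PAUSE_THRESHOLD_S:
        wait_queue.append({
            "thread": thread_id,
            "wakeup_at": time.monotonic() + pause_duration_s,
        })
        return True        # short pause: wait event queued
    return False           # long pause: fall back to normal scheduling

def reap_expired(wait_queue):
    """Delay-thread side: remove wait events whose wait time has expired."""
    now = time.monotonic()
    wait_queue[:] = [e for e in wait_queue if e["wakeup_at"] > now]
```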
Abstract:
A computer-implemented method selectively adjusts a resources addresses cache of addresses of resources used by virtual processors. A hypervisor issues a first dispatch that dispatches a first virtual processor, and then tracks processes executed by the first virtual processor. The hypervisor caches addresses of resources used by the processes after the first dispatch in a resources addresses cache. The hypervisor undispatches the first virtual processor, and then redispatches the first virtual processor as a second virtual processor by issuing a second dispatch. Processes executed by the second virtual processor are compared to processes executed by the first virtual processor, thus identifying a level of process utilization consistency. The hypervisor then adjusts the resources addresses cache by selectively clearing resource addresses based on the level of process utilization consistency.
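A minimal sketch that approximates process utilization consistency as the overlap between the process sets seen in the two dispatch intervals (the abstract does not specify the metric); all names are hypothetical:

```python
def adjust_resource_cache(resource_cache, first_run_procs, second_run_procs,
                          consistency_threshold=0.5):
    """Selectively clear cached resource addresses based on how consistent the
    virtual processor's process mix is across the two dispatches.

    resource_cache:   dict mapping process name -> cached resource addresses
    first_run_procs:  set of processes tracked after the first dispatch
    second_run_procs: set of processes tracked after the redispatch
    """
    overlap = first_run_procs & second_run_procs
    union = first_run_procs | second_run_procs
    consistency = len(overlap) / len(union) if union else 1.0
    if consistency >= consistency_threshold:
        # Consistent workload across dispatches: keep the cached addresses.
        return consistency
    # Inconsistent workload: clear addresses cached for processes that did
    # not run again after the redispatch.
    for proc in list(resource_cache):
        if proc not in second_run_procs:
            del resource_cache[proc]
    return consistency

cache = {"dbd": [0x1000], "httpd": [0x2000], "cron": [0x3000]}
print(adjust_resource_cache(cache, {"dbd", "httpd", "cron"}, {"dbd"}), cache)
```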
Abstract:
A method and computer program product for managing a file cache with a filesystem cache manager is disclosed. The method may include installing the filesystem cache manager for the file cache by a mount command. The filesystem cache manager may include a specified time interval and a first cache elimination instruction. The method may further include starting a first timer upon the installation of the filesystem cache manager. The method may further include running the first cache elimination instruction when the first timer reaches the specified time interval.
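A minimal sketch of the timer-driven manager, using Python's threading.Timer in place of whatever timer facility the filesystem would actually use:

```python
import threading

class FilesystemCacheManager:
    """Runs a cache elimination instruction every `interval_s` seconds,
    starting when the manager is installed (e.g. at mount time)."""
    def __init__(self, interval_s, eliminate):
        self.interval_s = interval_s
        self.eliminate = eliminate            # the first cache elimination instruction
        self._timer = None

    def install(self):
        self._schedule()                      # start the first timer on installation

    def _schedule(self):
        self._timer = threading.Timer(self.interval_s, self._fire)
        self._timer.daemon = True
        self._timer.start()

    def _fire(self):
        self.eliminate()                      # timer reached the specified interval
        self._schedule()                      # re-arm for the next interval

    def uninstall(self):
        if self._timer is not None:
            self._timer.cancel()

mgr = FilesystemCacheManager(interval_s=60.0,
                             eliminate=lambda: print("evicting stale cache entries"))
mgr.install()
```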
Abstract:
A system and technique for adaptive lock list searching of waiting threads includes logic executable by a processor to: determine an average service time for a lock associated with a shared computing resource; determine an average search time for selecting a thread to next receive the lock from a plurality of threads waiting for the lock; sum the average service time and the average search time; apply a search factor to the summed average service time and average search time to obtain a target search time for searching the waiting threads for selecting the next thread for obtaining the lock; determine a quantity of waiting threads to consider for next obtaining the lock based on the target search time and the average search time, the quantity being less than a total quantity of waiting threads; and identify the next thread to obtain the lock from the quantity.
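A minimal sketch of the arithmetic: the target search time is the summed averages scaled by the search factor, and the number of waiters to examine is that budget divided by the average per-thread search time (the priority-based selection rule in the second helper is an assumption; the abstract does not say how the next thread is chosen from the considered subset):

```python
def waiters_to_search(avg_service_s, avg_search_s, search_factor, total_waiters):
    """How many waiting threads to consider for next obtaining the lock,
    kept below the total number of waiters."""
    target_search_s = search_factor * (avg_service_s + avg_search_s)
    quantity = int(target_search_s // avg_search_s)       # searches that fit in the budget
    if total_waiters <= 1:
        return total_waiters
    return max(1, min(quantity, total_waiters - 1))

def pick_next_holder(waiters, quantity, key=lambda t: t["priority"]):
    """Select the next lock holder from only the first `quantity` waiters."""
    candidates = waiters[:quantity]
    return max(candidates, key=key) if candidates else None

# Example: 40 us average service time, 5 us average per-thread search time,
# search factor 0.5 -> 22.5 us budget -> consider 4 of the 32 waiters.
print(waiters_to_search(40e-6, 5e-6, 0.5, 32))
```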