Abstract:
Embodiments of the invention provide a method, apparatus, and computer program product for enabling a thread to acquire a lock associated with a shared resource, when a locking mechanism is used therewith, wherein each embodiment reduces waiting time and enhances efficiency in using the shared resource. One embodiment is associated with a plurality of processors, which includes two or more processors, each of which provides a specified thread to access a shared resource. The shared resource can only be accessed by one thread at a given time; a locking mechanism enables a first one of the specified threads to access the shared resource while each of the other specified threads is retained in a waiting queue, and a second one of the specified threads occupies the position of highest priority in the queue. The method includes the step of identifying a time period between a time when the first specified thread releases access to the shared resource and a later time when the second specified thread becomes enabled to access the shared resource. Responsive to an additional thread that is not one of the specified threads being provided by a processor to access the shared resource during the identified time period, it is determined whether a first prespecified criterion pertaining to the specified threads retained in the queue has been met. Responsive to the first criterion being met, the method determines whether a second prespecified criterion has been met, wherein the second criterion is that the number of specified threads in the queue has not decreased since a specified prior time. Responsive to the second criterion being met, the method then decides whether to enable the additional thread to access the shared resource before the second specified thread accesses the resource.
Abstract:
Two or more processors each provide a specified thread to access a shared resource that can only be accessed by one thread at a given time. A locking mechanism enables one of the threads to access the shared resource while the other threads are retained in a waiting queue. Responsive to an additional thread that is not one of the specified threads being provided to access the shared resource during an identified time period, and responsive to a first criterion and a second criterion being met, the additional thread accesses the shared resource before the other threads in the waiting queue.
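The two abstracts above describe the same barging decision, which lends itself to a short sketch. The Python below is a toy model, not the patented implementation: the abstract does not give the concrete form of the first criterion, so a minimum queue length (`queue_threshold`) stands in for it, and the handoff-window bookkeeping is likewise an illustrative assumption.

```python
import time
from collections import deque

class BargingLock:
    """Toy model of the described lock: a FIFO waiting queue of specified
    threads, plus a barging check applied to an additional thread that
    arrives between a release and the head waiter's wakeup."""

    def __init__(self, queue_threshold=4, handoff_latency=0.001):
        self.waiting = deque()                  # specified threads in the queue
        self.queue_threshold = queue_threshold  # assumed first criterion
        self.prior_queue_len = 0                # snapshot for the second criterion
        self.release_time = None                # when the holder last released
        self.handoff_latency = handoff_latency  # gap before head waiter runs (s)

    def release(self):
        """The first specified thread releases the lock; the second one
        needs handoff_latency seconds before it can actually run."""
        self.release_time = time.monotonic()
        self.prior_queue_len = len(self.waiting)

    def in_handoff_window(self):
        """True during the identified time period between the release and the
        moment the highest-priority waiter becomes enabled to run."""
        return (self.release_time is not None and
                time.monotonic() - self.release_time < self.handoff_latency)

    def may_barge(self):
        """Whether an additional (non-queued) thread may take the lock ahead
        of the highest-priority waiter."""
        if not self.in_handoff_window():
            return False
        # Assumed first criterion: the queue is long enough that one extra
        # holder will not noticeably delay the queued threads.
        if len(self.waiting) < self.queue_threshold:
            return False
        # Second criterion (from the abstract): the number of queued threads
        # has not decreased since the prior snapshot.
        return len(self.waiting) >= self.prior_queue_len
```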
Abstract:
An apparatus includes a processor and a volatile memory that is configured to be accessible in an active memory sharing configuration. The apparatus includes a machine-readable medium encoded with instructions executable by the processor. The instructions include first virtual machine instructions configured to access the volatile memory with a first virtual machine. The instructions include second virtual machine instructions configured to access the volatile memory with a second virtual machine. The instructions include virtual machine monitor instructions configured to page data out from a shared memory to a reserved memory section in the volatile memory responsive to the first virtual machine or the second virtual machine paging the data out from the shared memory or paging the data in to the shared memory. The shared memory is shared across the first virtual machine and the second virtual machine. The volatile memory includes the shared memory.
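As a rough illustration of that paging path, the sketch below models the two memory regions with plain dictionaries; the class names and dict-per-region layout are assumptions, not the patented structure.

```python
class VolatileMemory:
    """Volatile memory split into a shared pool (used by both virtual
    machines) and a reserved section that the monitor pages into."""
    def __init__(self):
        self.shared = {}    # page_id -> data, shared across the VMs
        self.reserved = {}  # page_id -> data, the reserved memory section

class VirtualMachineMonitor:
    """When either VM pages data out of the shared pool, stage the evicted
    page in the reserved section of volatile memory rather than writing it
    to backing storage; page it back in from there on demand."""
    def __init__(self, memory):
        self.memory = memory

    def page_out(self, page_id):
        self.memory.reserved[page_id] = self.memory.shared.pop(page_id)

    def page_in(self, page_id):
        self.memory.shared[page_id] = self.memory.reserved.pop(page_id)

class VirtualMachine:
    def __init__(self, name, monitor):
        self.name, self.monitor = name, monitor

    def evict(self, page_id):
        # VM paging activity triggers the monitor's paging path.
        self.monitor.page_out(page_id)

    def touch(self, page_id):
        if page_id not in self.monitor.memory.shared:
            self.monitor.page_in(page_id)
        return self.monitor.memory.shared[page_id]
```

With two VirtualMachine instances sharing one VolatileMemory, a page evicted by the first VM stays in volatile memory and can be touched back in by the second without a trip to backing storage.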
Abstract:
A software thread is dispatched for causing the system to poll a device for determining whether a condition has occurred. Subsequently, the software thread is undispatched and, in response thereto, an interrupt is enabled on the device, so that the device is enabled to generate the interrupt in response to an occurrence of the condition, and so that the system ceases polling the device for determining whether the condition has occurred. Eventually, the software thread is redispatched and, in response thereto, the interrupt is disabled on the device, so that the system resumes polling the device for determining whether the condition has occurred.
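A hedged sketch of that dispatch-driven switch between polling and interrupts follows; the Device class and the scheduler-callback names are invented for illustration.

```python
class Device:
    """Toy device: the condition can be observed by polling, or the device
    can raise an interrupt when interrupts are enabled on it."""
    def __init__(self):
        self.interrupt_enabled = False
        self.condition = False

    def poll(self):
        return self.condition

class PollingThread:
    """Mirrors the abstract: poll while dispatched, hand off to the
    device's interrupt while undispatched."""
    def __init__(self, device):
        self.device = device

    def on_dispatch(self):
        # Redispatched: disable the interrupt and resume polling.
        self.device.interrupt_enabled = False

    def on_undispatch(self):
        # Undispatched: enable the interrupt so the device can signal the
        # condition while nobody is polling for it.
        self.device.interrupt_enabled = True

    def poll_once(self):
        """Called repeatedly while the thread is dispatched; returns True
        when the condition is observed by polling."""
        return (not self.device.interrupt_enabled) and self.device.poll()
```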
Abstract:
A mechanism is provided for biasing placement of a software thread on a currently idle and dispatched processor. The operating system starts with the last logical processor on which the software thread ran, determines whether that processor is idle and dispatched, and considers each logical processor in turn until a currently dispatched and idle logical processor is found. If no currently dispatched and idle logical processor is found, the operating system biases placement of the software thread toward an idle logical processor.
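That search order reads naturally as a loop. The sketch assumes each logical processor exposes idle and dispatched flags and that the thread records the index of the processor it last ran on; neither detail is spelled out in the abstract.

```python
def place_thread(thread, logical_processors):
    """Bias placement toward an idle *and* dispatched logical processor,
    starting with the processor the thread last ran on."""
    n = len(logical_processors)
    start = thread.last_cpu_index
    # Consider each logical processor, beginning with the last one used.
    for offset in range(n):
        cpu = logical_processors[(start + offset) % n]
        if cpu.idle and cpu.dispatched:
            return cpu
    # No currently dispatched and idle processor: bias toward any idle one.
    for cpu in logical_processors:
        if cpu.idle:
            return cpu
    return logical_processors[start]  # nothing idle; fall back to last used
```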
Abstract:
The present invention provides a computer implemented method, data processing system, and computer program product for mapping and dispatching virtual processors in a data processing system having at least a first partition and a second partition. The data processing system runs a first partition on a virtual processor during a first timeslice. The data processing system identifies at least one physical page used by the first partition and the second partition. The data processing system maps the at least one physical page to the first partition and the second partition. The data processing system determines a fitness value based on the mapping. The data processing system dispatches the virtual processor to the second partition on a second timeslice based on the fitness value, wherein the second timeslice immediately succeeds the first timeslice, whereby the at least one physical page remains in cache during at least the first timeslice and the second timeslice.
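One plausible reading of the fitness computation is sketched below; the per-timeslice page sets and the overlap threshold are illustrative assumptions, not values from the patent.

```python
def fitness(pages_used_by, partition_a, partition_b):
    """Fitness = number of physical pages mapped into both partitions.
    pages_used_by maps a partition id to the set of physical pages it
    touched during its last timeslice (assumed bookkeeping)."""
    return len(pages_used_by[partition_a] & pages_used_by[partition_b])

def next_partition(current, candidates, pages_used_by, threshold=1):
    """After current's timeslice ends, dispatch the virtual processor to
    the candidate partition whose page overlap with current is highest,
    so the shared pages stay in cache across both timeslices."""
    best = max(candidates, key=lambda p: fitness(pages_used_by, current, p))
    if fitness(pages_used_by, current, best) >= threshold:
        return best
    return candidates[0]  # no useful overlap; use the default order
```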
Abstract:
A mechanism is provided for determining whether to use cache affinity as a criterion for software thread dispatching in a shared processor logical partitioning data processing system. The server firmware may store data about when and/or how often logical processors are dispatched. Given these data, the operating system may collect metrics. Using the logical processor metrics, the operating system may determine whether cache affinity is likely to provide a significant performance benefit relative to the cost of dispatching a particular logical processor to the operating system.
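The abstract leaves the metrics open. The sketch below assumes the firmware-provided data take the form of a log of dispatch timestamps per logical processor, with two illustrative thresholds; both the data shape and the thresholds are assumptions.

```python
def cache_affinity_worthwhile(dispatch_times, now,
                              recency_limit=0.010, min_per_second=100):
    """Decide whether honoring cache affinity for a logical processor is
    likely to beat the cost of dispatching it.  dispatch_times is the list
    of timestamps (seconds) at which the logical processor was dispatched;
    both thresholds are illustrative, not values from the patent."""
    if not dispatch_times:
        return False
    # Recently dispatched: its cache probably still holds useful lines.
    recently = (now - dispatch_times[-1]) <= recency_limit
    # Dispatched often: the cache is kept warm between dispatches.
    last_second = [t for t in dispatch_times if now - t <= 1.0]
    often = len(last_second) >= min_per_second
    return recently and often
```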
Abstract:
A computer implemented method, data processing system, and computer usable program code are provided for dispatching virtual processors. A determination is made as to whether a physical processor in a set of physical processors is idle, and, if so, a determination is made as to whether an affinity map for the idle physical processor exists. Responsive to an existence of the affinity map, a determination is made as to whether a virtual processor last mapped to the idle physical processor is ready to run, using the affinity map and a dispatch algorithm. Responsive to identifying a ready-to-run virtual processor whose affinity map indicates that the idle physical processor was mapped to this virtual processor in its preceding dispatch, the ready-to-run virtual processor is dispatched to the idle physical processor. Thus, memory affinity is maintained between physical and virtual processors while the memory affinity has not expired.
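A sketch of that dispatch loop under assumed data shapes: affinity_map maps a physical processor id to the virtual processor from its preceding dispatch plus an expiry time, and physical processors expose idle, ident, and run(), none of which the abstract specifies.

```python
import time

def dispatch_idle(physical_cpus, affinity_map, ready_queue):
    """For each idle physical processor, prefer the virtual processor it
    ran in its preceding dispatch, provided that virtual processor is
    ready to run and the affinity has not expired; otherwise fall back to
    the normal dispatch algorithm (here, head of the ready queue)."""
    now = time.monotonic()
    for phys in physical_cpus:
        if not phys.idle:
            continue
        entry = affinity_map.get(phys.ident)   # (virtual_proc, expiry) or None
        if entry is not None:
            vproc, expiry = entry
            if vproc in ready_queue and now < expiry:
                ready_queue.remove(vproc)
                phys.run(vproc)                # memory affinity preserved
                continue
        if ready_queue:
            phys.run(ready_queue.pop(0))       # default dispatch path
```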
Abstract:
Interrupt frequency control by estimating processor load in the peripheral adapter provides adaptive interrupt latency to improve performance in a processing system. A mathematical function of the depth of one or more queues of the adapter is compared to its historical value in order to provide an estimate of processor load. The estimated processor load is then used to set a parameter that controls the frequency of an interrupt generator, which may be controlled by setting an interrupt queue depth threshold, packet frequency threshold or interrupt hold-off time value. The mathematical function may be the ratio of the transmit queue depth to the receive queue depth and the historical value may be predetermined, user-settable, obtained during a calibration interval or obtained by taking a long-term average of the mathematical function of the queue depths.
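In rough Python, the control step might look like the sketch below. The scaling from estimated load to queue-depth threshold is an assumption, and the abstract equally allows driving a packet-frequency threshold or an interrupt hold-off time instead; the long-term average shown is one of the ways the abstract permits obtaining the historical value.

```python
def interrupt_threshold(tx_depth, rx_depth, historical_ratio,
                        base_threshold=8, max_threshold=64):
    """Estimate processor load from the ratio of transmit to receive queue
    depth, compared against its historical value, and derive an interrupt
    queue-depth threshold from the estimate."""
    ratio = tx_depth / max(rx_depth, 1)          # function of the queue depths
    load = ratio / max(historical_ratio, 1e-9)   # >1.0 means busier than the norm
    # Busier processor -> higher threshold, so interrupts fire less often;
    # lightly loaded processor -> lower threshold for lower latency.
    return max(1, min(max_threshold, round(base_threshold * load)))

def update_historical(historical_ratio, tx_depth, rx_depth, alpha=0.01):
    """Maintain the historical value as a long-term (exponential) average
    of the queue-depth ratio."""
    ratio = tx_depth / max(rx_depth, 1)
    return (1 - alpha) * historical_ratio + alpha * ratio
```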