Abstract:
Illustrative embodiments provide a computer implemented method, a data processing system and a computer program product for lock contention reduction. In one illustrative embodiment, the computer implemented method provides a lock to an active thread, increments a lock counter, receives a request to de-schedule the active thread, and determines whether the lock is held by the active thread. The computer implemented method, responsive to a determination that the lock is held by the active thread, adds a first pre-determined amount to a time slice of the active thread.
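As a rough illustration of the de-scheduling decision described above, the following Python sketch extends an active thread's time slice when a de-schedule request arrives while its lock counter is non-zero. The Thread and Scheduler classes, the lock counter field, and the TIME_SLICE_BONUS constant are illustrative assumptions, not the claimed embodiment.

```python
TIME_SLICE_BONUS = 2  # first pre-determined amount added to the time slice


class Thread:
    def __init__(self, name, time_slice=10):
        self.name = name
        self.time_slice = time_slice
        self.lock_counter = 0  # number of locks currently held by this thread


class Scheduler:
    def grant_lock(self, thread):
        """Provide a lock to the active thread and increment its lock counter."""
        thread.lock_counter += 1

    def release_lock(self, thread):
        thread.lock_counter = max(0, thread.lock_counter - 1)

    def request_deschedule(self, thread):
        """Handle a request to de-schedule the active thread."""
        if thread.lock_counter > 0:
            # Lock is held: extend the time slice instead of de-scheduling,
            # giving the thread a chance to release the lock first.
            thread.time_slice += TIME_SLICE_BONUS
            return False  # de-scheduling deferred
        return True  # safe to de-schedule


if __name__ == "__main__":
    sched = Scheduler()
    t = Thread("worker")
    sched.grant_lock(t)
    print(sched.request_deschedule(t), t.time_slice)  # False 12
    sched.release_lock(t)
    print(sched.request_deschedule(t), t.time_slice)  # True 12
```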
Abstract:
Partition migrations are scheduled between virtual partitions of a virtually partitioned data processing system. The virtually partitioned data processing system is a tickless system in which a periodic timer interrupt is not guaranteed to be sent to the processor at a defined time interval. A request is received for a partition migration. Gaps between scheduled timer interrupts are identified. The partition migration is then scheduled to occur within the largest gap.
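A minimal sketch of the gap-selection step, assuming the scheduled timer-interrupt times are available as a list of timestamps: the migration is placed in the largest gap between consecutive interrupts. Function and variable names are illustrative, not the claimed hypervisor logic.

```python
def schedule_migration(interrupt_times):
    """Return (start, end) of the largest gap between scheduled interrupts."""
    times = sorted(interrupt_times)
    best_start, best_end, best_gap = None, None, -1.0
    for earlier, later in zip(times, times[1:]):
        gap = later - earlier
        if gap > best_gap:
            best_gap, best_start, best_end = gap, earlier, later
    return best_start, best_end


if __name__ == "__main__":
    # Timer interrupts scheduled at irregular (tickless) intervals, in ms.
    scheduled = [0, 4, 5, 23, 30, 31]
    start, end = schedule_migration(scheduled)
    print(f"migrate partition within the gap {start}-{end} ms")  # 5-23 ms
```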
Abstract:
A computer implemented method, apparatus, and computer program product for preserving branch history data. The process creates a branch history table in a buffer. The process saves, in the branch history table, an address for each branch instruction executed during execution of code, to form branch history data. In response to detecting an exception, the process saves the branch history data to an allocated memory space to form a branch history snapshot.
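The following sketch illustrates the general idea under simple assumptions: a fixed-size buffer acts as the branch history table, and an exception handler copies its contents into separately allocated storage to form a snapshot. The class and method names are hypothetical.

```python
from collections import deque


class BranchHistory:
    def __init__(self, size=8):
        self.table = deque(maxlen=size)   # branch history table in a buffer
        self.snapshots = []               # allocated space for snapshots

    def record_branch(self, address):
        """Save the address of an executed branch instruction."""
        self.table.append(address)

    def on_exception(self, reason):
        """Copy the branch history data to form a branch history snapshot."""
        self.snapshots.append((reason, list(self.table)))


if __name__ == "__main__":
    bh = BranchHistory(size=4)
    for addr in (0x1000, 0x1040, 0x2000, 0x2010, 0x3000):
        bh.record_branch(addr)
    bh.on_exception("page fault")
    print([hex(a) for a in bh.snapshots[0][1]])  # last four branch addresses
```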
Abstract:
A computer implemented method, data processing system, and computer program product for dynamically scheduling algorithms in a pipeline which operate on a stream of data. The illustrative embodiments determine a computational cost of each algorithm in a plurality of algorithms in a pipeline. The plurality of algorithms in the pipeline processes an incoming data stream in a first sequential algorithm order. The illustrative embodiments reorder the plurality of algorithms in the pipeline to form a second sequential algorithm order based on the computational cost of each algorithm. The plurality of algorithms may then be executed in the second sequential algorithm order. When the illustrative embodiments assign a spare processing unit to an algorithm at an end of the pipeline, the computational cost of each algorithm in the plurality of algorithms in the pipeline is redetermined.
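One way to picture the cost-based reordering, assuming the pipeline stages are order-independent filters, is sketched below: each stage is timed on a sample of the stream and the pipeline is re-formed in order of measured cost. The sorting direction, function names, and sampling scheme are assumptions for illustration only.

```python
import time


def cheap_stage(x):
    return x + 1


def costly_stage(x):
    return sum(i * x for i in range(5000)) % 97


def measure_cost(stage, sample):
    """Determine a computational cost for one algorithm in the pipeline."""
    start = time.perf_counter()
    for item in sample:
        stage(item)
    return time.perf_counter() - start


def reorder_pipeline(stages, sample):
    """Form a second sequential algorithm order based on measured cost."""
    costs = {stage: measure_cost(stage, sample) for stage in stages}
    return sorted(stages, key=costs.get)  # cheapest algorithm first


def run_pipeline(stages, stream):
    """Execute the algorithms in their (re)ordered sequence on the stream."""
    results = []
    for item in stream:
        for stage in stages:
            item = stage(item)
        results.append(item)
    return results


if __name__ == "__main__":
    stream = list(range(100))
    new_order = reorder_pipeline([costly_stage, cheap_stage], sample=stream[:10])
    print([s.__name__ for s in new_order])  # cheap_stage measured as cheaper
    run_pipeline(new_order, stream)
```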
Abstract:
A computer implemented method, data processing system, and computer program product for reducing memory traffic via detection and tracking of temporally silent stores. When a memory store, comprising an address and a data value, to a cache is detected, a determination is made that a cache line in the cache contains the same address as the address in the memory store. A determination is then made that a tentative cache line invalidate signal for the cache line was previously sent to other data processing systems in a network to tentatively invalidate the cache line. If the memory store is a temporally silent store, a cache line revalidate signal is sent to the other data processing systems to clear the tentative invalidate signal for the cache line.
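A minimal sketch of the revalidation idea, assuming each cache line remembers the value last observed by the other caches: a store that changes the value broadcasts a tentative invalidate, and a later store that restores the remembered value (a temporally silent store) broadcasts a revalidate instead of a full invalidate. Names and message formats are illustrative.

```python
class CacheLine:
    def __init__(self, address, value):
        self.address = address
        self.visible_value = value      # value the other caches last observed
        self.current_value = value
        self.tentatively_invalid = False

    def store(self, value, bus):
        self.current_value = value
        if not self.tentatively_invalid and value != self.visible_value:
            # Value changed: tentatively invalidate the line elsewhere.
            bus.append(("tentative_invalidate", self.address))
            self.tentatively_invalid = True
        elif self.tentatively_invalid and value == self.visible_value:
            # Temporally silent store: the old value is restored, so clear
            # the tentative invalidate on the other data processing systems.
            bus.append(("revalidate", self.address))
            self.tentatively_invalid = False


if __name__ == "__main__":
    bus = []  # messages broadcast to the other caches
    line = CacheLine(address=0x80, value=7)
    line.store(9, bus)   # value changes -> tentative invalidate
    line.store(7, bus)   # value reverts -> revalidate (temporally silent)
    print(bus)
```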
Abstract:
A method, system, and computer usable program product for energy conservation in multipath data communications are provided in the illustrative embodiments. A current utilization of each of several I/O devices is determined. A violation determination is made as to whether an I/O device from the several I/O devices can be powered down without violating a rule. The I/O device is powered down responsive to the violation determination being false. A powering up determination may be made as to whether an additional I/O device is needed in a multipath I/O configuration. The I/O device may be located, powered up, and made available for the multipath I/O configuration. A latency determination may be made as to whether a latency time of the I/O device can elapse before the time when the additional I/O device is needed. The powering up may occur no later than the latency time before the time the additional I/O device is needed.
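A simplified sketch of the power-down and power-up decisions, assuming the "rule" is a per-device utilization ceiling and the latency is a fixed power-up delay; the real embodiments may use different rules and timings, and all names here are illustrative.

```python
LATENCY = 5  # seconds a powered-down device needs before it can serve I/O


class IODevice:
    def __init__(self, name, utilization):
        self.name = name
        self.utilization = utilization  # fraction of device capacity in use
        self.powered = True


def try_power_down(devices, max_per_device=0.8):
    """Power down one I/O device if the remaining paths can absorb its load
    without violating the utilization rule; return it, or None."""
    active = [d for d in devices if d.powered]
    for candidate in active:
        others = [d for d in active if d is not candidate]
        if not others:
            continue
        # Total active load spread over the remaining paths must stay
        # under the per-device ceiling (the "rule" in this sketch).
        per_device_load = sum(d.utilization for d in active) / len(others)
        if per_device_load <= max_per_device:
            candidate.powered = False
            return candidate
    return None


def power_up_deadline(needed_at):
    """Latest time to start powering a device up so it is ready when needed."""
    return needed_at - LATENCY


if __name__ == "__main__":
    paths = [IODevice("hba0", 0.2), IODevice("hba1", 0.3), IODevice("hba2", 0.1)]
    idle = try_power_down(paths)
    print("powered down:", idle.name if idle else None)
    if idle:
        print("start powering up no later than t =", power_up_deadline(needed_at=100))
```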