Abstract:
A system and method for dynamically altering the Virtual Memory Manager's (VMM) Sequential-Access Read Ahead settings based upon current system memory conditions is provided. Normal VMM operations are performed using the Sequential-Access Read Ahead values set by the user. When low memory is detected, the system either decreases the maximum page ahead (maxpgahead) value or turns off Sequential-Access Read Ahead operations entirely, depending on whether the amount of free space is simply low or has reached a critically low level. The altered VMM Sequential-Access Read Ahead state remains in effect until enough free space is available for normal VMM Sequential-Access Read Ahead operations to resume, at which point the altered Sequential-Access Read Ahead values are reset to their original levels.
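A minimal sketch of this policy in C, assuming hypothetical names (vmm_state, free_pages, low_watermark, critical_watermark, maxpgahead) that stand in for the VMM's real tunables and bookkeeping; the halving of maxpgahead is purely illustrative:

    #include <stdbool.h>

    /* Hypothetical VMM state; field names are illustrative, not AIX's. */
    struct vmm_state {
        unsigned long free_pages;          /* currently free page frames       */
        unsigned long low_watermark;       /* "low" free-space threshold       */
        unsigned long critical_watermark;  /* "critically low" threshold       */
        unsigned int  maxpgahead;          /* current max pages to read ahead  */
        unsigned int  user_maxpgahead;     /* value originally set by the user */
        bool          readahead_enabled;
    };

    /* Re-evaluate read-ahead settings against current free memory. */
    static void tune_readahead(struct vmm_state *vmm)
    {
        if (vmm->free_pages <= vmm->critical_watermark) {
            /* Critically low: turn sequential read-ahead off entirely. */
            vmm->readahead_enabled = false;
        } else if (vmm->free_pages <= vmm->low_watermark) {
            /* Merely low: keep read-ahead but shrink maxpgahead (halving is illustrative). */
            vmm->readahead_enabled = true;
            vmm->maxpgahead = vmm->user_maxpgahead / 2;
            if (vmm->maxpgahead == 0)
                vmm->maxpgahead = 1;
        } else {
            /* Enough free space again: restore the user's original settings. */
            vmm->readahead_enabled = true;
            vmm->maxpgahead = vmm->user_maxpgahead;
        }
    }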
Abstract:
Administration of locks for critical sections of computer programs in a computer that supports a multiplicity of logical partitions, including determining, by a thread executing on a virtual processor that is executing in a time slice on a physical processor, whether an expected lock time for a critical section of the thread exceeds the remaining entitlement of the virtual processor in the time slice, and deferring acquisition of a lock if the expected lock time exceeds the remaining entitlement.
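A sketch of the deferral test in C, with hypothetical helpers (remaining_entitlement_us, expected_lock_time_us) standing in for whatever the hypervisor interface and per-critical-section timing history would actually supply:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical accessors; real values would come from the hypervisor
     * and from timing history kept per critical section. */
    uint64_t remaining_entitlement_us(void);              /* time left in this virtual processor's slice */
    uint64_t expected_lock_time_us(int critical_section); /* expected lock hold time                     */

    /* Decide whether to acquire the lock now or defer acquisition (for
     * example by yielding and retrying later), so the critical section is
     * not cut short by the end of the time slice. */
    static bool should_defer_lock(int critical_section)
    {
        return expected_lock_time_us(critical_section) > remaining_entitlement_us();
    }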
Abstract:
Methods, systems, and media are disclosed for improved granularity of a request-response communication on a networked computer system. One example embodiment includes receiving the request-response communication by the networked computer system, and associating the request-response communication with a port, having a nodelay setting, from a set of ports on the networked computer system. Further, the example embodiment includes enabling, based upon the associating, the nodelay setting upon connection of the request-response communication with the port. Further still, the example embodiment includes sending, in accordance with the enabling, the request-response communication to a destination in communication with the networked computer system. In addition, further example embodiments include configuring the ports on the networked computer system with nodelay values indicating whether a particular port is assigned nodelay or no nodelay for the request portion or response portion of a request-response communication connecting to that particular port.
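A minimal sketch in C of how a per-port nodelay policy might be applied to an accepted connection using the standard TCP_NODELAY socket option; the policies table and the port numbers in it are purely illustrative:

    #include <stddef.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Hypothetical per-port policy table: 1 = set TCP_NODELAY, 0 = leave Nagle on. */
    struct port_policy { unsigned short port; int nodelay; };

    static const struct port_policy policies[] = {
        { 8080, 1 },   /* interactive request-response traffic   */
        { 9000, 0 },   /* bulk transfer, keep Nagle's algorithm   */
    };

    /* Apply the configured nodelay value to a freshly accepted connection. */
    static int apply_port_nodelay(int connfd, unsigned short port)
    {
        for (size_t i = 0; i < sizeof(policies) / sizeof(policies[0]); i++) {
            if (policies[i].port == port) {
                int flag = policies[i].nodelay;
                return setsockopt(connfd, IPPROTO_TCP, TCP_NODELAY,
                                  &flag, sizeof(flag));
            }
        }
        return 0; /* no policy for this port: leave the system default */
    }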
Abstract:
A system, apparatus and method of reducing the adverse performance impact of migrating processes from one processor to another in a multi-processor system are provided. When a process is executing, the number of cycles taken per instruction (CPI) of the process is stored. After execution of the process, an average CPI is computed and stored in a storage device associated with the process. When a run queue of the multi-processor system is empty, a process may be chosen for migration to the empty run queue from the run queue that has the most processes awaiting execution. The chosen process is the one with the highest average CPI.
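A sketch in C of the victim-selection step, using a hypothetical proc structure that carries the stored average CPI:

    #include <stddef.h>

    /* Hypothetical process descriptor carrying its stored average CPI. */
    struct proc {
        int    pid;
        double avg_cpi;      /* average cycles per instruction, updated after each run */
        struct proc *next;   /* next process on the same run queue                     */
    };

    /* From the busiest run queue, pick the process with the highest average
     * CPI as the one to migrate to the empty run queue, per the selection
     * rule described above. */
    static struct proc *pick_migration_victim(struct proc *busiest_queue)
    {
        struct proc *victim = NULL;
        for (struct proc *p = busiest_queue; p != NULL; p = p->next) {
            if (victim == NULL || p->avg_cpi > victim->avg_cpi)
                victim = p;
        }
        return victim;
    }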
Abstract:
Scheduling threads in a multi-processor computer system including establishing an interrupt threshold for a thread, where the interrupt threshold represents a maximum permissible number of interrupts during thread execution on a processor; executing the thread on a current processor, where the thread has thread affinity for one or more processors including the current processor; counting a number of interrupts during execution of the thread on the current processor; and removing thread affinity for the current processor in dependence upon the counted number of interrupts and the interrupt threshold.
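A minimal sketch in C, with a hypothetical sched_thread structure, of counting interrupts during a run and dropping affinity for the current processor once the threshold is exceeded:

    /* Hypothetical per-thread bookkeeping for interrupt-aware affinity. */
    struct sched_thread {
        unsigned long affinity_mask;       /* one bit per processor the thread may run on */
        unsigned int  interrupt_count;     /* interrupts seen during the current run      */
        unsigned int  interrupt_threshold; /* maximum permissible interrupts per run      */
    };

    /* Called from the interrupt path while the thread runs on processor `cur`.
     * Once the count exceeds the threshold, remove affinity for that processor. */
    static void note_interrupt(struct sched_thread *t, int cur)
    {
        t->interrupt_count++;
        if (t->interrupt_count > t->interrupt_threshold)
            t->affinity_mask &= ~(1UL << cur);  /* drop affinity for this processor */
    }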
Abstract:
A computer implemented method, apparatus, and computer usable code for managing cache data. A partition identifier is associated with a cache entry in a cache, wherein the partition identifier identifies a last partition accessing the cache entry. The partition identifier associated with the cache entry is compared with a previous partition identifier located in a processor register in response to the cache entry being moved into a lower level cache relative to the cache. The cache entry is marked if the partition identifier associated with the cache entry matches the previous partition identifier located in the processor register to form a marked cache entry, wherein the marked cache entry is aged at a slower rate relative to an unmarked cache entry.
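A simplified model in C of the marking and aging rules, with hypothetical field names (partition_id, marked, age) and an illustrative 2:1 aging ratio standing in for whatever rate the hardware would actually use:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical model of a cache entry tagged with the last accessing partition. */
    struct cache_entry {
        uint32_t partition_id;  /* last logical partition that touched this entry */
        bool     marked;        /* true if it belongs to the previous partition   */
        unsigned age;           /* replacement age; higher ages are evicted first */
    };

    /* When the entry is moved into a lower-level cache, compare its partition
     * tag with the previous-partition register and mark it on a match. */
    static void on_castout(struct cache_entry *e, uint32_t prev_partition_reg)
    {
        e->marked = (e->partition_id == prev_partition_reg);
    }

    /* Marked entries age at a slower rate, so they survive longer and are more
     * likely to still be resident when their partition is dispatched again. */
    static void age_entry(struct cache_entry *e)
    {
        e->age += e->marked ? 1 : 2;
    }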
Abstract:
A method, computer program product, and data processing system for processing transactions of a client-server application are provided. A first data set is transmitted from a client to a server. A second data set to be transmitted to the server is received by the client. An evaluation is made to determine whether transmission of the second data set is blocked until receipt of an acknowledgment of the first data set. The number of allowable outstanding acknowledgments is increased responsive to determining that the second data set is blocked from transmission.
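A minimal sketch in C of the client-side bookkeeping, assuming a hypothetical conn_state structure that tracks outstanding and allowable unacknowledged data sets:

    #include <stdbool.h>

    /* Hypothetical per-connection state on the client side. */
    struct conn_state {
        unsigned outstanding_acks;      /* data sets sent but not yet acknowledged */
        unsigned max_outstanding_acks;  /* how many may be outstanding at once     */
    };

    /* A send is blocked if it would exceed the allowed number of
     * unacknowledged data sets already in flight. */
    static bool send_blocked(const struct conn_state *c)
    {
        return c->outstanding_acks >= c->max_outstanding_acks;
    }

    /* When a transmission is found to be blocked behind an unacknowledged
     * data set, raise the allowance so the second data set can go out. */
    static void maybe_relax_ack_limit(struct conn_state *c)
    {
        if (send_blocked(c))
            c->max_outstanding_acks++;
    }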
Abstract:
A system and method for identifying compatible threads in a Simultaneous Multithreading (SMT) processor environment is provided. A performance metric, such as CPI, is calculated for the period during which two threads are running together on the SMT processor. If the CPI achieved while both threads were executing is better than the compatibility threshold, information indicating the compatibility is recorded. When a thread is about to complete, the scheduler looks at the run queue to which the completing thread belongs in order to dispatch another thread. The scheduler identifies a thread that is (1) compatible with the thread still running on the SMT processor (i.e., the thread that is not about to complete) and (2) ready to execute. The CPI data is continually updated so that threads that are compatible with one another are continually identified.
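A sketch in C of the two pieces described above, recording pair compatibility from a measured CPI and scanning the ready queue for a compatible partner at dispatch time; the matrix representation and the threshold comparison are assumptions for illustration:

    #include <stdbool.h>

    #define MAX_THREADS 64

    /* Hypothetical compatibility matrix: compatible[a][b] is set when the CPI
     * measured while threads a and b ran together beat the threshold. */
    static bool compatible[MAX_THREADS][MAX_THREADS];

    static void record_pair(int a, int b, double measured_cpi, double cpi_threshold)
    {
        bool ok = measured_cpi < cpi_threshold;     /* lower CPI is better */
        compatible[a][b] = compatible[b][a] = ok;
    }

    /* At dispatch time, scan the ready queue for a thread that is marked
     * compatible with the thread still running on the other SMT context. */
    static int pick_compatible(const int *ready, int nready, int still_running)
    {
        for (int i = 0; i < nready; i++) {
            if (compatible[still_running][ready[i]])
                return ready[i];
        }
        return nready > 0 ? ready[0] : -1;  /* fall back to the head of the queue */
    }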
Abstract:
A system and method for scheduling threads in a Simultaneous Multithreading (SMT) processor environment utilizing multiple SMT processors is provided. Poorly performing threads running on each of the SMT processors are identified and moved to a different SMT processor. Data is captured regarding the performance of threads; in one embodiment, this data includes each thread's CPI value. When a thread is moved, data regarding the thread and its performance at the time of the move is recorded along with a timestamp. The data regarding previous moves is used to determine whether a thread's performance improved following the move.
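A minimal sketch in C of the move record described above, with hypothetical fields, and a check that compares a thread's current CPI against the CPI recorded when it was moved:

    #include <time.h>

    /* Hypothetical record of a thread migration between SMT processors. */
    struct move_record {
        int    tid;
        int    from_cpu, to_cpu;
        double cpi_at_move;   /* the thread's CPI when it was moved */
        time_t when;          /* timestamp of the move              */
    };

    /* Compare the thread's current CPI against the CPI recorded at its last
     * move to decide whether the migration actually helped (lower CPI wins). */
    static int move_improved(const struct move_record *last, double current_cpi)
    {
        return current_cpi < last->cpi_at_move;
    }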