Abstract:
A computer implemented method for receiving data from a sender across a network connection for a data transfer. An expected size of a congestion window for the sender is identified. The amount of data received from the sender is tracked. An acknowledgment is sent in response to the amount of data received from the sender meeting the expected size of the congestion window for the sender.
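The following is a minimal sketch, not the patented implementation: a receiver object that counts the bytes arriving from a sender and issues an acknowledgment once the count reaches the sender's expected congestion window size. The names Receiver, expected_cwnd, and on_segment are illustrative assumptions.

```python
class Receiver:
    def __init__(self, expected_cwnd, send_ack):
        self.expected_cwnd = expected_cwnd   # expected size of the sender's congestion window, in bytes
        self.received = 0                    # bytes received since the last acknowledgment
        self.send_ack = send_ack             # callback that transmits an ACK back to the sender

    def on_segment(self, payload):
        """Track incoming data and acknowledge a full congestion window's worth."""
        self.received += len(payload)
        if self.received >= self.expected_cwnd:
            self.send_ack()                  # one ACK per congestion window of received data
            self.received = 0


if __name__ == "__main__":
    acks = []
    r = Receiver(expected_cwnd=4096, send_ack=lambda: acks.append("ACK"))
    for _ in range(8):
        r.on_segment(b"x" * 1024)            # eight 1 KiB segments -> two full windows
    print(len(acks))                         # prints 2
```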
Abstract:
A system and method for scheduling threads in a Simultaneous Multithreading (SMT) processor environment utilizing multiple SMT processors is provided. Poorly performing threads that are running on each of the SMT processors are identified. After being identified, the poorly performing threads are moved to a different SMT processor. Data is captured regarding the performance of threads. In one embodiment, this data includes each thread's CPI value. When a thread is moved, data regarding the thread and its performance at the time it was moved is recorded along with a timestamp. The data regarding previous moves is used to determine whether a thread's performance improved following the move.
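A simplified sketch of this idea follows; it is an assumption-laden illustration, not the patented scheduler. The thread with the worst (highest) CPI on each SMT processor is migrated to another processor, the move is recorded with a timestamp and the CPI at migration time, and a later CPI sample is compared against that record. Thread, rebalance, and cpi_threshold are hypothetical names.

```python
import time

class Thread:
    def __init__(self, tid, cpi):
        self.tid = tid
        self.cpi = cpi                 # cycles per instruction (lower is better)
        self.move_history = []         # list of (timestamp, cpi_at_move) records

def rebalance(processors, cpi_threshold=2.0):
    """Move the poorest-performing thread off each SMT processor that has one."""
    for idx, run_queue in enumerate(processors):
        if not run_queue:
            continue
        worst = max(run_queue, key=lambda t: t.cpi)
        if worst.cpi < cpi_threshold:
            continue                                   # no poorly performing thread here
        worst.move_history.append((time.time(), worst.cpi))
        run_queue.remove(worst)
        target = (idx + 1) % len(processors)           # pick a different SMT processor
        processors[target].append(worst)

def improved_since_move(thread, current_cpi):
    """Use the recorded move data to decide whether performance improved."""
    if not thread.move_history:
        return None
    _, cpi_at_move = thread.move_history[-1]
    return current_cpi < cpi_at_move
```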
Abstract:
A system and method is provided that reserves a software lock for a waiting thread. When a software lock is released by a first thread, a second thread that is waiting for the same resource controlled by the software lock is woken up. In addition, a reservation to the software lock is established for the second thread. After the reservation is established, if the lock is available and requested by a thread other than the second thread, the requesting thread is denied, added to the wait queue, and put to sleep. In addition, the reservation is cleared. After the reservation has been cleared, the lock will be granted to the next thread to request the lock.
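Below is a single-threaded model of the reservation scheme, written as an illustration under stated assumptions rather than as the kernel implementation: releasing the lock wakes the first waiter and reserves the lock for it; if another thread then requests the available lock, that thread is queued and the reservation is cleared. ReservedLock, acquire, and release are assumed names.

```python
from collections import deque

class ReservedLock:
    def __init__(self):
        self.owner = None          # thread id currently holding the lock
        self.reserved_for = None   # thread id the lock is being held open for
        self.wait_queue = deque()  # threads sleeping until the lock is granted

    def acquire(self, tid):
        """Return True if tid now owns the lock, False if it must sleep."""
        if self.owner is None and self.reserved_for in (None, tid):
            self.owner = tid
            self.reserved_for = None
            return True
        if self.owner is None and self.reserved_for != tid:
            # Lock is free but reserved for another thread: deny the request,
            # queue the requester, and clear the reservation.
            self.wait_queue.append(tid)
            self.reserved_for = None
            return False
        self.wait_queue.append(tid)    # lock is held; wait for it
        return False

    def release(self, tid):
        assert self.owner == tid
        self.owner = None
        if self.wait_queue:
            # Wake the next waiter and reserve the lock on its behalf.
            self.reserved_for = self.wait_queue.popleft()
```

One design consequence visible in the model: the reservation protects the woken thread only until another thread actually requests the free lock, at which point the abstract's fallback of first-come, first-served resumes.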
Abstract:
A system and method for implementing a fast file synchronization in a data processing system. A memory management unit divides a file stored in system memory into a collection of data block groups. In response to a master (e.g., processing unit, peripheral, etc.) modifying a first data block group among the collection of data block groups, the memory management unit writes a first block group number associated with the first data block group to system memory. In response to a master modifying a second data block group, the memory management unit writes the first data block group to a hard disk drive and writes a second data block group number associated with the second data block group to system memory. In response to a request to update modified data block groups of the file stored in the system memory to the hard disk drive, the memory management unit writes the second data block group to the hard disk drive.
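A toy model of this deferred write-back is sketched below; it is hedged as an illustration only. The "block group number kept in system memory" is modeled as an attribute and the hard disk as a dictionary; FileSync, modify_group, and sync are hypothetical names.

```python
class FileSync:
    def __init__(self, block_groups):
        self.memory = dict(block_groups)   # group number -> data held in system memory
        self.disk = {}                     # group number -> data written to the hard disk drive
        self.pending_group = None          # group number recorded as modified but not yet on disk

    def modify_group(self, group_no, data):
        """Modify one block group; flush the previously modified group to disk."""
        if self.pending_group is not None and self.pending_group != group_no:
            self.disk[self.pending_group] = self.memory[self.pending_group]
        self.memory[group_no] = data
        self.pending_group = group_no      # record the newly modified group number

    def sync(self):
        """On a file-synchronization request, flush the one remaining modified group."""
        if self.pending_group is not None:
            self.disk[self.pending_group] = self.memory[self.pending_group]
            self.pending_group = None
```

The point of the scheme, as modeled here, is that a synchronization request only has to write the most recently modified group, because earlier modified groups were already flushed when modification moved on to a new group.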
Abstract:
A method, apparatus, processor, system, and signal-bearing medium that in an embodiment determine which page to replace in memory when the memory is full based on reference and re-reference indicators in page table entries. In an embodiment, a reference indicator in an entry is set when its associated page is accessed in memory and the reference indicator was previously clear. The re-reference indicator in an entry is set when its associated page is accessed and the reference indicator was previously set. Both the reference and re-reference indicators are cleared if their associated page is accessed and both were previously set. When a new page is accessed and the memory is full, a page in the memory is not available for replacement if both its reference and its re-reference indicators are set. Otherwise, the page is available for replacement.
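The sketch below models a page table entry with the two indicators, updated on each access exactly as the abstract describes; it is an illustrative assumption, not the patented hardware. PageTableEntry, replaceable, and pick_victim are invented names for the example.

```python
class PageTableEntry:
    def __init__(self):
        self.referenced = False
        self.re_referenced = False

    def on_access(self):
        if self.referenced and self.re_referenced:
            # Both indicators set: a further access clears them both.
            self.referenced = False
            self.re_referenced = False
        elif self.referenced:
            self.re_referenced = True      # second access since the last clear
        else:
            self.referenced = True         # first access since the last clear

    def replaceable(self):
        """A page with both indicators set is protected from replacement."""
        return not (self.referenced and self.re_referenced)

def pick_victim(page_table):
    """Return a page number eligible for replacement when memory is full."""
    for page_no, entry in page_table.items():
        if entry.replaceable():
            return page_no
    return None   # every page is protected; the caller would clear indicators or retry
```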
Abstract:
A method, computer program product, and a data processing system for queuing threads among a plurality of processors in a multiple processor system having a plurality of multi-processor modules is provided. A first thread to be processed is received and is identified as part of an existing process. A search for an idle processor is performed. The search is restricted to processors of a first multi-processor module associated with the existing process.
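As a rough illustration of the restricted search, the sketch below looks for an idle processor only within the multi-processor module already associated with the thread's process. Module, find_idle_processor, and the process-to-module map are assumptions made for the example.

```python
class Module:
    def __init__(self, module_id, processors):
        self.module_id = module_id
        self.processors = processors          # list of processor ids on this module
        self.busy = set()                     # processor ids currently running a thread

    def idle_processors(self):
        return [p for p in self.processors if p not in self.busy]

def find_idle_processor(process_id, process_to_module):
    """Restrict the idle-processor search to the module associated with the process."""
    module = process_to_module.get(process_id)
    if module is not None:
        for proc in module.idle_processors():
            return proc                       # idle processor found on the home module
    return None                               # no idle processor on that module

# Example: process 42 is associated with module 0, so only cpu0/cpu1 are searched.
m0 = Module(0, ["cpu0", "cpu1"]); m0.busy.add("cpu0")
m1 = Module(1, ["cpu2", "cpu3"])
print(find_idle_processor(42, {42: m0}))      # prints cpu1
```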
Abstract:
A system and method is provided for delaying a priority boost of an execution thread. When a thread prepares to enter a critical section of code, such as when the thread utilizes a shared system resource, a user mode accessible data area is updated to indicate that the thread is in a critical section and to record the priority boost the thread should receive if the kernel processes a preemption event. If the kernel receives a preemption event before the thread finishes the critical section, the kernel applies the priority boost on behalf of the thread. Often, the thread finishes the critical section without its priority actually being boosted. If the thread does receive an actual priority boost, then, after the critical section is finished, the kernel resets the thread's priority to its normal level.
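The sketch below is a hedged illustration of the lazy boost, not the kernel's actual interface: a user-mode visible record tells the kernel that the thread is inside a critical section and what boost it should get if a preemption event arrives before the section ends; in the common case the section ends first and no boost is ever applied. ThreadState and the three functions are hypothetical names.

```python
class ThreadState:
    def __init__(self, base_priority):
        self.priority = base_priority
        self.base_priority = base_priority
        # User-mode accessible data area:
        self.in_critical_section = False
        self.requested_boost = 0
        self.boosted = False

def enter_critical_section(t, boost):
    t.in_critical_section = True     # cheap user-mode update, no kernel call needed
    t.requested_boost = boost

def kernel_preemption_event(t):
    """Kernel path: apply the deferred boost on the thread's behalf."""
    if t.in_critical_section and not t.boosted:
        t.priority += t.requested_boost
        t.boosted = True

def exit_critical_section(t):
    t.in_critical_section = False
    if t.boosted:                    # reset only if a boost was actually applied
        t.priority = t.base_priority
        t.boosted = False
```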