Abstract:
Sharing a kernel of an operating system among logical partitions, including installing in a partition manager a kernel of a type used by a plurality of logical partitions; installing in the partition manager generic data structures specifying computer resources assigned to each of the plurality of logical partitions; and providing, by the kernel to the logical partitions, kernel services in dependence upon the generic data structures.
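A minimal sketch in C, assuming invented names (lpar_resources, kernel_query_memory) and a trivial lookup, of how a shared kernel might consult generic per-partition data structures kept by the partition manager when answering a service call:

```c
/* Minimal sketch (not from the patent text): a hypothetical generic
 * resource descriptor kept by the partition manager for each logical
 * partition, consulted by the shared kernel when servicing a call. */
#include <stdio.h>
#include <stddef.h>

struct lpar_resources {            /* hypothetical generic data structure */
    int      lpar_id;              /* logical partition identifier        */
    size_t   memory_bytes;         /* memory assigned to the partition    */
    int      num_cpus;             /* processors assigned                 */
    int      io_slots[4];          /* I/O adapters assigned               */
};

/* The shared kernel answers a "how much memory do I have?" service call
 * by looking up the caller's partition in the generic structures.       */
static size_t kernel_query_memory(const struct lpar_resources *table,
                                  int nparts, int caller_lpar)
{
    for (int i = 0; i < nparts; i++)
        if (table[i].lpar_id == caller_lpar)
            return table[i].memory_bytes;
    return 0;
}

int main(void)
{
    struct lpar_resources parts[] = {
        { 1, 1UL << 30, 2, {3, 4, -1, -1} },
        { 2, 2UL << 30, 4, {5, -1, -1, -1} },
    };
    printf("LPAR 2 memory: %zu bytes\n",
           kernel_query_memory(parts, 2, 2));
    return 0;
}
```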
Abstract:
A service module that provides for discovery of one or more network interfaces connecting a prospective remote procedure call (RPC) client facilitates the provision of RPC programs in a network that includes multi-homed systems. When a request for a network address to an RPC application providing an RPC program is received from the RPC client, the RPC bind daemon discovers from the module, using the client response address, over which interface(s) the client is accessible. The daemon then selects an address of a network path to the RPC application that the prospective client can access and returns the corresponding network address. The service module monitors the network stack for RPC get-address requests and builds tables of client address entries with corresponding network interface identifiers. The entries are retired according to an aging policy.
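A hedged sketch of the table the service module might maintain; the names (client_entry, table_record, table_expire), the fixed-size array, and the 300-second TTL are assumptions for illustration, not the product's implementation:

```c
/* Illustrative only: a hypothetical table mapping RPC client addresses
 * to the network interface a request arrived on, retired by age. */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define MAX_ENTRIES 64
#define ENTRY_TTL_SECONDS 300          /* hypothetical aging policy */

struct client_entry {
    char   client_addr[64];            /* client response address   */
    int    interface_id;               /* interface request came on */
    time_t last_seen;                  /* updated on each request   */
};

static struct client_entry table[MAX_ENTRIES];
static int nentries;

/* Record (or refresh) the interface over which a client is reachable. */
void table_record(const char *addr, int ifid)
{
    for (int i = 0; i < nentries; i++) {
        if (strcmp(table[i].client_addr, addr) == 0) {
            table[i].interface_id = ifid;
            table[i].last_seen = time(NULL);
            return;
        }
    }
    if (nentries < MAX_ENTRIES) {
        snprintf(table[nentries].client_addr,
                 sizeof table[nentries].client_addr, "%s", addr);
        table[nentries].interface_id = ifid;
        table[nentries].last_seen = time(NULL);
        nentries++;
    }
}

/* Retire entries older than the TTL, per the aging policy. */
void table_expire(void)
{
    time_t now = time(NULL);
    for (int i = 0; i < nentries; ) {
        if (now - table[i].last_seen > ENTRY_TTL_SECONDS)
            table[i] = table[--nentries];   /* swap-delete */
        else
            i++;
    }
}
```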
Abstract:
A computer program product for scheduling threads in a multiprocessor computer comprises computer program instructions configured to select a thread in a ready queue to be dispatched to a processor and determine whether an interrupt mask flag is set in a thread control block associated with the thread. If the interrupt mask flag is set in the thread control block associated with the thread, the computer program instructions are configured to select a processor, set a current processor priority register of the selected processor to least favored, and dispatch the thread from the ready queue to the selected processor.
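A minimal sketch, assuming an invented thread_ctl/processor model and an assumed CPPR encoding, of the dispatch path described above:

```c
/* Hedged sketch, not the product's code: dispatching a thread whose
 * control block has the interrupt-mask flag set.  All names and the
 * CPPR encoding are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

#define CPPR_LEAST_FAVORED 0xFF   /* value associated with the highest
                                     interrupt priority in this sketch's
                                     assumed encoding                    */

struct thread_ctl {
    int  tid;
    bool interrupt_mask_flag;     /* run with external interrupts masked? */
};

struct processor {
    int id;
    int cppr;                     /* current processor priority register  */
    int running_tid;
};

static struct processor *select_processor(struct processor *cpus, int n)
{
    (void)n;
    return &cpus[0];              /* trivial selection for illustration   */
}

void dispatch(struct thread_ctl *t, struct processor *cpus, int ncpus)
{
    struct processor *p = select_processor(cpus, ncpus);
    if (t->interrupt_mask_flag)
        p->cppr = CPPR_LEAST_FAVORED;   /* make this CPU least favored    */
    p->running_tid = t->tid;
    printf("thread %d dispatched to cpu %d (cppr=0x%02x)\n",
           t->tid, p->id, p->cppr);
}

int main(void)
{
    struct processor cpus[2] = { { .id = 0 }, { .id = 1 } };
    struct thread_ctl t = { .tid = 7, .interrupt_mask_flag = true };
    dispatch(&t, cpus, 2);
    return 0;
}
```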
Abstract:
An approach is provided that reserves a software lock for a waiting thread. When a software lock is released by a first thread, a second thread that is waiting for the same resource controlled by the software lock is woken up. In addition, a reservation to the software lock is established for the second thread. After the reservation is established, if the lock is available and requested by a thread other than the second thread, the requesting thread is denied, added to the wait queue, and put to sleep. In addition, the reservation is cleared. After the reservation has been cleared, the lock will be granted to the next thread to request the lock.
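The following is an illustrative sketch, not the actual implementation; the lock word, the reservation field, and the FIFO wait queue are simplifications, and all names are invented:

```c
/* Sketch of a lock with an explicit reservation field.  Initialize
 * reserved_for_tid to -1 (no reservation).  Sleep/wake-up of threads
 * is not modeled here. */
#include <stdbool.h>

struct reservable_lock {
    bool held;
    int  owner_tid;
    int  reserved_for_tid;        /* -1 means no reservation           */
    int  wait_queue[16];          /* simplified FIFO of waiting tids   */
    int  nwaiting;
};

/* Returns true if the lock was granted to 'tid'. */
bool lock_acquire(struct reservable_lock *l, int tid)
{
    if (!l->held &&
        (l->reserved_for_tid == -1 || l->reserved_for_tid == tid)) {
        l->held = true;
        l->owner_tid = tid;
        l->reserved_for_tid = -1;         /* reservation consumed      */
        return true;
    }
    /* Lock busy or reserved for another thread: queue the requester.  */
    if (l->nwaiting < 16)
        l->wait_queue[l->nwaiting++] = tid;
    if (!l->held && l->reserved_for_tid != tid)
        l->reserved_for_tid = -1;         /* clear the reservation; the
                                             next requester will get
                                             the lock                   */
    return false;
}

void lock_release(struct reservable_lock *l)
{
    l->held = false;
    if (l->nwaiting > 0) {
        int next = l->wait_queue[0];      /* "wake" the first waiter    */
        for (int i = 1; i < l->nwaiting; i++)
            l->wait_queue[i - 1] = l->wait_queue[i];
        l->nwaiting--;
        l->reserved_for_tid = next;       /* reserve the lock for it    */
    }
}
```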
Abstract:
The present invention provides an improved method, system, and computer program product that can optimize cache utilization. In one embodiment, a kernel service creates a storage map and sends the storage map to an application. In one embodiment of the present invention, the step of the kernel service creating the storage map may further comprise the kernel service creating a cache map. In one embodiment of the present invention, the step of the kernel service creating the storage map may further comprise the kernel service creating an indication of one or more storage locations that have been allocated to store information for the application. In one embodiment of the present invention, the step of the kernel service creating the storage map may further comprise the kernel service creating the storage map in response to receiving a request for the storage map from the application.
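A small sketch under assumptions (invented kernel_get_storage_map and storage_map names, an arbitrary 128-byte cache line) of a kernel service building a storage map on request and returning it to the application:

```c
/* Illustrative only: the kernel service describes which storage
 * locations back an application's allocation so the application can
 * tune its accesses for the cache. */
#include <stdio.h>
#include <stdlib.h>

struct storage_map {
    unsigned long base;           /* first storage location allocated  */
    unsigned long nlocations;     /* number of locations in the map    */
    unsigned long cache_line;     /* cache-map detail: line size used  */
};

/* Hypothetical kernel service: build a map in response to a request. */
struct storage_map *kernel_get_storage_map(unsigned long base,
                                           unsigned long nlocations)
{
    struct storage_map *m = malloc(sizeof *m);
    if (!m)
        return NULL;
    m->base = base;
    m->nlocations = nlocations;
    m->cache_line = 128;          /* assumed line size for the sketch  */
    return m;
}

int main(void)
{
    /* Application requests the map and could align its accesses to it. */
    struct storage_map *m = kernel_get_storage_map(0x10000, 4096);
    if (m) {
        printf("map: base=0x%lx locations=%lu line=%lu\n",
               m->base, m->nlocations, m->cache_line);
        free(m);
    }
    return 0;
}
```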
Abstract:
A system for balancing component load. In response to receiving a request, data is updated to reflect a current number of pending requests. In response to analyzing the updated data, it is determined whether throttling is necessary. In response to determining that throttling is not necessary, a request corresponding to the received request is created and a flag is set in the corresponding request. Then, the corresponding request is sent to one of a plurality of lower-level components of an input/output stack of an operating system for processing, based on the analyzed data, to balance component load in the input/output stack of the operating system.
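A hedged sketch of the described flow; the threshold, the component count, and all names are assumptions, and the lower-level components are reduced to per-component counters:

```c
/* Sketch: count pending requests, decide whether to throttle, and
 * otherwise create a corresponding request, flag it, and forward it
 * to the least-loaded lower-level component. */
#include <stdbool.h>
#include <stdio.h>

#define THROTTLE_THRESHOLD   128       /* assumed pending-request limit */
#define NUM_LOWER_COMPONENTS 4

struct io_request {
    int  id;
    bool balanced_flag;                /* flag set in the corresponding
                                          request before forwarding     */
};

static int pending;                              /* current pending     */
static int lower_pending[NUM_LOWER_COMPONENTS];  /* per-component load  */

/* Returns false if the request must be throttled by the caller. */
bool handle_request(const struct io_request *received)
{
    pending++;                                   /* update the data     */
    if (pending > THROTTLE_THRESHOLD) {          /* analyze it          */
        pending--;
        return false;                            /* throttle            */
    }

    /* Create the corresponding request and set its flag.               */
    struct io_request corresponding = { .id = received->id,
                                        .balanced_flag = true };

    /* Send it to the least-loaded lower-level component.               */
    int target = 0;
    for (int i = 1; i < NUM_LOWER_COMPONENTS; i++)
        if (lower_pending[i] < lower_pending[target])
            target = i;
    lower_pending[target]++;
    printf("request %d (flag=%d) sent to lower component %d\n",
           corresponding.id, corresponding.balanced_flag, target);
    return true;
}

/* Called when a lower-level component completes a request. */
void complete_request(int component)
{
    pending--;
    lower_pending[component]--;
}
```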
Abstract:
A method, system, device, and article of manufacture for use in a computer memory system utilizing multiple page types, for handling a memory resource request. In accordance with the method of the invention, a request is received for allocation of pages having a first page type. The first page type has a specified allocation limit. In response to the page allocation request, a determination is made of whether the number of allocated pages of the first page type exceeds or falls below the allocation limit. In response to determining that the number of allocated pages of the first page type is below the allocation limit, the virtual memory manager enables allocation of pages for the request to exceed the allocation limit.
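A minimal sketch, with invented names and a simplified policy, of the limit check the virtual memory manager might perform for a page type:

```c
/* Sketch only: per-page-type bookkeeping and the limit decision. */
#include <stdbool.h>
#include <stdio.h>

struct page_type {
    const char   *name;
    unsigned long allocated;      /* pages currently allocated         */
    unsigned long limit;          /* specified allocation limit        */
};

/* Decide whether an allocation of 'npages' may proceed. */
bool vmm_may_allocate(struct page_type *t, unsigned long npages)
{
    (void)npages;
    if (t->allocated < t->limit)
        return true;              /* below the limit: allocation for the
                                     request is enabled, even if it
                                     carries the count past the limit  */
    return false;                 /* at or above the limit: deny        */
}

int main(void)
{
    struct page_type large = { "large", 90, 100 };
    printf("allocate 20 pages? %s\n",
           vmm_may_allocate(&large, 20) ? "yes (may exceed limit)" : "no");
    return 0;
}
```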
Abstract:
A computer implemented method, apparatus, and computer usable program product for utilizing instruction trace registers. In one embodiment, a value in a target processor register in a plurality of processor registers is updated in response to executing an instruction associated with program code. In response to updating the value in the target processor register, an address for the instruction is copied from an instruction address register into an instruction trace register associated with the target processor register. The instruction trace register holds the address of the instruction that updated the value stored in the target processor register.
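Because the mechanism lives in processor hardware, the following is only a C simulation with invented names; it shows the described pairing of an instruction trace register with each target register:

```c
/* Simulation only: when an instruction updates a target register, the
 * instruction's address is copied from the instruction address
 * register into the trace register paired with that target register. */
#include <stdio.h>

#define NUM_REGS 8

struct cpu_model {
    unsigned long iar;                 /* instruction address register */
    unsigned long gpr[NUM_REGS];       /* general purpose registers    */
    unsigned long trace[NUM_REGS];     /* one trace register per GPR   */
};

/* Model of executing an instruction that writes a target register. */
void exec_load_immediate(struct cpu_model *c, int target, unsigned long v)
{
    c->gpr[target]   = v;
    c->trace[target] = c->iar;         /* remember which instruction
                                          last updated this register   */
    c->iar += 4;                       /* advance to next instruction  */
}

int main(void)
{
    struct cpu_model c = { .iar = 0x10000 };
    exec_load_immediate(&c, 3, 42);
    exec_load_immediate(&c, 3, 99);
    printf("r3 last updated by instruction at 0x%lx\n", c.trace[3]);
    return 0;
}
```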
Abstract:
A method, system, and program for managing memory page requests in a multi-processor data processing system determines a threshold value of available memory and dynamically adjusts an allocation time to fulfill a page request if the available memory is below that threshold value. The allocation time to fulfill the page request is based upon the percentage of available memory pages once a page stealer commences a scan for pages. An allocation wait time is adjusted in inverse proportion to the percentage of available memory: its duration increases as the percentage of available memory decreases and decreases as the percentage of available memory increases. More specifically, an average time per page to allocate a page is determined, with the time for the page stealer's scan included in computing the average. A tunable value is then applied to the average time to determine a wait time. In a preferred embodiment, user-defined values are received that control the allocation wait time before fulfilling a page request.
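A sketch of the arithmetic only; the constants, units, and function names are assumptions, but it shows the average-time and inverse-proportionality relationships described above:

```c
/* Hedged sketch: wait time grows as the percentage of available memory
 * shrinks, starting from an average per-page allocation time (scan time
 * included) scaled by a tunable factor. */
#include <stdio.h>

/* Average time (microseconds) to allocate one page, scan included. */
double average_time_per_page(double alloc_us, double scan_us,
                             unsigned long pages)
{
    return (alloc_us + scan_us) / (double)pages;
}

/* Wait time inversely proportional to the available-memory percentage. */
double allocation_wait_us(double avg_us, double tunable,
                          double pct_available)
{
    if (pct_available <= 0.0)
        pct_available = 0.1;          /* avoid division by zero        */
    return (avg_us * tunable) / pct_available;
}

int main(void)
{
    double avg = average_time_per_page(2000.0, 500.0, 100);
    printf("wait at 50%% free: %.1f us\n", allocation_wait_us(avg, 4.0, 50.0));
    printf("wait at  5%% free: %.1f us\n", allocation_wait_us(avg, 4.0, 5.0));
    return 0;
}
```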
Abstract:
Methods, systems, and computer program products are provided for scheduling threads in a multiprocessor computer. Embodiments include selecting a thread in a ready queue to be dispatched to a processor and determining whether an interrupt mask flag is set in a thread control block associated with the thread. If the interrupt mask flag is set in the thread control block associated with the thread, embodiments typically include selecting a processor, setting a current processor priority register of the selected processor to least favored, and dispatching the thread from the ready queue to the selected processor. In some embodiments, setting the current processor priority register of the selected processor to least favored is carried out by storing a value associated with the highest interrupt priority in the current processor priority register.
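A small complementary sketch, under an assumed priority encoding, of why storing the value associated with the highest interrupt priority in the current processor priority register keeps external interrupts away from the selected processor:

```c
/* Assumed encoding: larger numbers mean more favored interrupt
 * priorities, and an interrupt is delivered only if its priority
 * exceeds the current processor priority register (CPPR). */
#include <stdbool.h>
#include <stdio.h>

#define CPPR_ACCEPT_ALL    0x00   /* lowest priority: takes any interrupt */
#define CPPR_LEAST_FAVORED 0xFF   /* value associated with the highest
                                     interrupt priority (assumed)        */

static unsigned char cppr = CPPR_ACCEPT_ALL;

/* Deliver an external interrupt only if it outranks the CPPR. */
bool deliver_interrupt(unsigned char irq_priority)
{
    return irq_priority > cppr;
}

int main(void)
{
    printf("irq prio 0x40 delivered? %d\n", deliver_interrupt(0x40)); /* 1 */
    cppr = CPPR_LEAST_FAVORED;   /* thread with the mask flag dispatched  */
    printf("irq prio 0x40 delivered? %d\n", deliver_interrupt(0x40)); /* 0 */
    return 0;
}
```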