Abstract:
A computer implemented method, apparatus, and computer usable code for managing cache information in a logical partitioned data processing system. When a cache entry is selected for removal from the cache, a determination is made as to whether a unique identifier in a tag associated with the cache entry matches a previous unique identifier for a currently executing partition in the logical partitioned data processing system, and the tag is saved in a storage device if the partition identifier in the tag matches the previous unique identifier.
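As a rough illustration only (not text from the patent), the eviction-time check could be sketched in C as below; the cache_tag layout, field names, and the save_tag_to_storage callback are assumptions made for this example.

    #include <stdint.h>

    /* Illustrative tag layout: each cache entry carries the unique
     * identifier of the partition that created it. */
    struct cache_tag {
        uint64_t partition_id;   /* unique identifier stored in the tag */
        uint64_t line_address;
    };

    /* Hypothetical hook called when an entry is selected for eviction.
     * If the tag belongs to the previously dispatched partition, the tag
     * is preserved in a storage device so it can be restored later. */
    void on_evict(struct cache_tag *tag,
                  uint64_t previous_partition_id,
                  void (*save_tag_to_storage)(const struct cache_tag *))
    {
        if (tag->partition_id == previous_partition_id) {
            save_tag_to_storage(tag);   /* keep the tag for the suspended partition */
        }
        /* otherwise the tag is simply discarded with the evicted entry */
    }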
Abstract:
A system and method for dynamically altering Virtual Memory Manager (VMM) Sequential-Access Read Ahead settings based upon current system memory conditions are provided. Normal VMM operations are performed using the Sequential-Access Read Ahead values set by the user. When low memory is detected, the system either turns off Sequential-Access Read Ahead operations or decreases the maximum page ahead (maxpgahead) value, depending on whether the amount of free space is simply low or has reached a critically low level. The altered VMM Sequential-Access Read Ahead state remains in effect until enough free space is available for normal VMM Sequential-Access Read Ahead operations to resume, at which point the altered Sequential-Access Read Ahead values are reset to their original levels.
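The threshold logic described above might be sketched in C as follows; maxpgahead is the tunable named in the abstract, while the free-page thresholds and the structure and function names are illustrative assumptions.

    /* Illustrative free-page thresholds; real values are platform tunables. */
    #define LOW_FREE_PAGES       1024
    #define CRITICAL_FREE_PAGES   256

    struct vmm_readahead {
        int enabled;           /* sequential-access read ahead on/off */
        int maxpgahead;        /* current maximum page-ahead value */
        int saved_maxpgahead;  /* user-set value, restored when memory recovers */
    };

    /* Re-evaluate read-ahead state whenever the free-page count changes. */
    void adjust_readahead(struct vmm_readahead *ra, long free_pages)
    {
        if (free_pages <= CRITICAL_FREE_PAGES) {
            ra->enabled = 0;                            /* critically low: turn read ahead off */
        } else if (free_pages <= LOW_FREE_PAGES) {
            ra->enabled = 1;
            ra->maxpgahead = ra->saved_maxpgahead / 2;  /* low: shrink maxpgahead */
            if (ra->maxpgahead < 1)
                ra->maxpgahead = 1;
        } else {
            ra->enabled = 1;
            ra->maxpgahead = ra->saved_maxpgahead;      /* normal: restore user setting */
        }
    }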
Abstract:
Administration of locks for critical sections of computer programs in a computer that supports a multiplicity of logical partitions, including determining, by a thread executing on a virtual processor running in a time slice on a physical processor, whether an expected lock time for a critical section of the thread exceeds the remaining entitlement of the virtual processor in the time slice, and deferring acquisition of a lock if the expected lock time exceeds the remaining entitlement.
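A minimal C sketch of the deferral decision, assuming hypothetical queries for the remaining time-slice entitlement and the expected lock hold time; none of these names come from the patent.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical query provided by the partition manager: ticks remaining
     * in the virtual processor's current time slice on the physical processor. */
    extern uint64_t remaining_entitlement_ticks(void);

    /* Expected hold time for the critical section, e.g. measured on earlier
     * acquisitions (an assumption for this sketch). */
    extern uint64_t expected_lock_ticks(const void *lock);

    /* Defer acquisition when the lock would likely still be held at the end
     * of the time slice, so the holder is not preempted mid-section. */
    bool should_defer_lock(const void *lock)
    {
        return expected_lock_ticks(lock) > remaining_entitlement_ticks();
    }

    void acquire_or_defer(void *lock,
                          void (*acquire)(void *),
                          void (*yield_to_next_slice)(void))
    {
        while (should_defer_lock(lock)) {
            yield_to_next_slice();   /* wait for a fresh entitlement before locking */
        }
        acquire(lock);
    }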
Abstract:
Methods, systems, and media are disclosed for improved granularity of a request-response communication on a networked computer system. One example embodiment includes receiving the request-response communication by the networked computer system, and associating the request-response communication with a port, having a nodelay setting, from a set of ports on the networked computer system. Further, the example embodiment includes enabling, based upon the associating, the nodelay setting upon connection of the request-response communication with the port. Further still, the example embodiment includes sending, in accordance with the enabling, the request-response communication to a destination in communication with the networked computer system. In addition, further example embodiments include configuring the ports on the networked computer system with nodelay values indicating whether a particular port is assigned nodelay or no nodelay for a request portion or response portion of a request-response communication connecting to that particular port.
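A hedged C sketch of per-port nodelay configuration using the standard TCP_NODELAY socket option; the port_policy table, example port numbers, and function name are assumptions, not taken from the abstract.

    #include <stddef.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* Illustrative per-port configuration table: 1 = apply nodelay to
     * connections associated with this port, 0 = leave Nagle coalescing on. */
    struct port_policy {
        unsigned short port;      /* port number, host byte order */
        int            nodelay;
    };

    static const struct port_policy policies[] = {
        { 8080, 1 },   /* hypothetical request-response service: send segments immediately */
        { 9090, 0 },   /* hypothetical bulk service: keep Nagle coalescing */
    };

    /* On accept(), look up the local port and enable TCP_NODELAY when the
     * table marks the port for nodelay behaviour. */
    int apply_port_nodelay(int sock, unsigned short local_port)
    {
        for (size_t i = 0; i < sizeof policies / sizeof policies[0]; i++) {
            if (policies[i].port == local_port) {
                return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                                  &policies[i].nodelay,
                                  sizeof policies[i].nodelay);
            }
        }
        return 0;   /* no explicit policy: leave the default setting */
    }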
Abstract:
Sharing a kernel of an operating system among logical partitions, including installing in a partition manager a kernel of a type used by a plurality of logical partitions; installing in the partition manager generic data structures specifying computer resources assigned to each of the plurality of logical partitions; and providing, by the kernel to the logical partitions, kernel services in dependence upon the generic data structures.
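One possible shape for the "generic data structures" mentioned above, sketched in C; all field and function names are assumptions made for illustration.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative generic resource record kept by the partition manager
     * for each logical partition. */
    struct lpar_resources {
        int       lpar_id;
        uint64_t  memory_base;     /* physical memory assigned to the partition */
        uint64_t  memory_size;
        int       virtual_cpus;    /* processor entitlement */
        int       io_slot_ids[8];  /* assigned I/O adapters */
    };

    /* The shared kernel resolves each service request against the generic
     * structures rather than partition-private state. */
    struct lpar_resources partition_table[16];

    const struct lpar_resources *resources_for(int lpar_id)
    {
        for (size_t i = 0; i < sizeof partition_table / sizeof partition_table[0]; i++) {
            if (partition_table[i].lpar_id == lpar_id)
                return &partition_table[i];
        }
        return NULL;   /* unknown partition */
    }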
Abstract:
A service module that provides for discovery of one or more network interfaces connecting a prospective remote procedure call (RPC) client facilitates the provision of RPC programs in a network including multi-homed systems. When a request for a network address to an RPC application providing an RPC program is received from the RPC client, the RPC bind daemon discovers from the module, using the client response address, over which interface(s) the client is accessible. The daemon then selects an address of a network path to the RPC application that the prospective client can access and returns the corresponding network address. The service module monitors the network stack for RPC get address requests and builds tables of client address entries with corresponding network interface identifiers. The entries are retired according to an aging policy.
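A simplified C sketch of the client-address table with an aging policy, as described above; the entry layout, TTL, and function names are assumptions.

    #include <string.h>
    #include <time.h>

    #define MAX_ENTRIES       256
    #define ENTRY_TTL_SECONDS 300   /* illustrative aging policy */

    /* One entry per observed RPC client: the client's response address and
     * the network interface over which it was seen. */
    struct client_entry {
        char   client_addr[64];   /* e.g. textual IP address */
        int    interface_id;      /* interface the get address request arrived on */
        time_t last_seen;
    };

    static struct client_entry table[MAX_ENTRIES];
    static int entry_count;

    /* Record (or refresh) the interface a client is reachable over. */
    void record_client(const char *addr, int interface_id)
    {
        for (int i = 0; i < entry_count; i++) {
            if (strcmp(table[i].client_addr, addr) == 0) {
                table[i].interface_id = interface_id;
                table[i].last_seen = time(NULL);
                return;
            }
        }
        if (entry_count < MAX_ENTRIES) {
            strncpy(table[entry_count].client_addr, addr, sizeof table[0].client_addr - 1);
            table[entry_count].client_addr[sizeof table[0].client_addr - 1] = '\0';
            table[entry_count].interface_id = interface_id;
            table[entry_count].last_seen = time(NULL);
            entry_count++;
        }
    }

    /* Drop entries that have not been refreshed within the TTL. */
    void age_entries(void)
    {
        time_t now = time(NULL);
        for (int i = 0; i < entry_count; ) {
            if (now - table[i].last_seen > ENTRY_TTL_SECONDS)
                table[i] = table[--entry_count];   /* retire the stale entry */
            else
                i++;
        }
    }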
Abstract:
The present invention provides an improved method, system, and computer program product that can optimize cache utilization. In one embodiment, a kernel service creates a storage map and sends said storage map to an application. In one embodiment of the present invention, the step of the kernel service creating the storage map may further comprise the kernel service creating a cache map. In one embodiment of the present invention, the step of the kernel service creating the storage map may further comprise the kernel service creating an indication of one or more storage locations that have been allocated to store information for the application. In one embodiment of the present invention, the step of the kernel service creating the storage map may further comprise the kernel service creating the storage map in response to receiving a request for the storage map from the application.
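A speculative C sketch of what a storage map and cache map returned by such a kernel service might contain; every name here is an assumption, including the get_storage_map service and the least_used_extent helper.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative storage map: the storage locations allocated for the
     * application plus a cache map describing how they fold into the cache. */
    struct storage_extent {
        uint64_t base;        /* starting address of an allocated region */
        uint64_t length;      /* size of the region in bytes */
        unsigned cache_set;   /* cache congruence class the region maps to */
    };

    struct storage_map {
        size_t                extent_count;
        struct storage_extent extents[32];
    };

    /* Hypothetical kernel service: build the map on request from the
     * application so it can lay out data to avoid cache-set collisions. */
    int get_storage_map(int application_id, struct storage_map *out);

    /* Example use: pick the extent whose cache set is least used, one
     * assumption about how an application might exploit the map. */
    const struct storage_extent *least_used_extent(const struct storage_map *map,
                                                   const unsigned set_usage[])
    {
        const struct storage_extent *best = NULL;
        for (size_t i = 0; i < map->extent_count; i++) {
            const struct storage_extent *e = &map->extents[i];
            if (best == NULL || set_usage[e->cache_set] < set_usage[best->cache_set])
                best = e;
        }
        return best;
    }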
Abstract:
A system for balancing component load. In response to receiving a request, data is updated to reflect a current number of pending requests. In response to analyzing the updated data, it is determined whether throttling is necessary. In response to determining that throttling is not necessary, a request corresponding to the received request is created and a flag is set in the corresponding request. Then, the corresponding request is sent to one of a plurality of lower level components of an input/output stack of an operating system for processing, based on the analyzed data, to balance component load in the input/output stack of the operating system.
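A rough C sketch of the throttling and dispatch decision described above; the threshold, structure names, and dispatch callback are assumptions.

    #include <stdbool.h>

    #define THROTTLE_THRESHOLD 128   /* illustrative pending-request limit */

    struct io_component_stats {
        int pending_requests;   /* updated as requests are received and completed */
    };

    struct io_request {
        bool balanced_flag;     /* set on the corresponding request before dispatch */
        int  target_component;  /* lower-level component chosen from the stats */
    };

    /* Decide whether to throttle; if not, build and dispatch the corresponding
     * request to the least-loaded lower-level component. */
    bool handle_request(struct io_component_stats stats[], int component_count,
                        void (*dispatch)(int component, struct io_request *))
    {
        if (component_count <= 0)
            return false;

        /* analyze the data reflecting current pending requests */
        int least = 0, total = 0;
        for (int i = 0; i < component_count; i++) {
            total += stats[i].pending_requests;
            if (stats[i].pending_requests < stats[least].pending_requests)
                least = i;
        }

        if (total >= THROTTLE_THRESHOLD)
            return false;   /* throttling necessary: caller delays or rejects */

        struct io_request req = { .balanced_flag = true, .target_component = least };
        stats[least].pending_requests++;   /* track the new pending request */
        dispatch(least, &req);
        return true;
    }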
Abstract:
A method, system, device, and article of manufacture for use in a computer memory system utilizing multiple page types, for handling a memory resource request. In accordance with the method of the invention, a request is received for allocation of pages having a first page type. The first page type has a specified allocation limit. A determination is made, in response to the page allocation request, of whether the number of allocated pages of the first page type exceeds or is below the allocation limit. In response to determining that the number of allocated pages of said first page type is below the allocation limit, the virtual memory manager enables allocation of pages for the request to exceed the allocation limit.
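A minimal C sketch of the allocation-limit check, assuming a simple per-page-type counter; the structure and function names are illustrative, not taken from the patent.

    #include <stdbool.h>

    /* Illustrative per-page-type accounting kept by the virtual memory manager. */
    struct page_type_pool {
        long allocated;   /* pages of this type currently allocated */
        long limit;       /* specified allocation limit for the type */
    };

    /* Handle a request for 'count' pages of the given type.  When the pool
     * starts below its limit, the request is allowed to proceed even if
     * satisfying it pushes the count past the limit, as described above. */
    bool allocate_pages(struct page_type_pool *pool, long count)
    {
        if (pool->allocated >= pool->limit)
            return false;   /* already at or over the limit: refuse */

        pool->allocated += count;   /* may exceed the limit for this request */
        return true;
    }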
Abstract:
A computer implemented method, apparatus, and computer usable program product for utilizing instruction trace registers. In one embodiment, a value in a target processor register in a plurality of processor registers is updated in response to executing an instruction associated with program code. In response to updating the value in the target processor register, an address for the instruction is copied from an instruction address register into an instruction trace register associated with the target processor register. The instruction trace register holds the address of the instruction that updated the value stored in the target processor register.
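A software model (as might appear in a simulator) of the paired-register behavior described above; the register count and names are assumptions, not hardware specifics from the patent.

    #include <stdint.h>

    #define NUM_REGS 32

    /* Each general-purpose register is paired with an instruction trace
     * register that records the address of the last instruction to modify it. */
    struct traced_registers {
        uint64_t gpr[NUM_REGS];     /* target processor registers */
        uint64_t trace[NUM_REGS];   /* paired instruction trace registers */
    };

    /* Called when an instruction at 'instr_addr' writes 'value' into register
     * 'target'; the instruction address is copied into the matching trace register. */
    void write_register(struct traced_registers *regs, int target,
                        uint64_t value, uint64_t instr_addr)
    {
        regs->gpr[target] = value;
        regs->trace[target] = instr_addr;   /* which instruction last updated this register */
    }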