Abstract:
Exemplary embodiments include a system and storage medium for managing computer processing functions in a multi-processor computer environment. The system includes a physical processor, a standard logical processor, an assist logical processor sharing the same logical partition as the standard logical processor, and a single operating system instance associated with the logical partition, the single operating system instance including a switch-to service and a switch-from service. The system also includes a dispatch component managed by the single operating system instance. When standard code invokes the switch-to service, the switch-to service checks whether an assist logical processor is online and, if so, updates an integrated assist field of a work element block associated with the task to indicate that the task is eligible to be executed on the assist logical processor. The switch-to service also assigns a work queue to the work element block.
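
To make the switch-to flow above concrete, the following is a minimal C sketch; the work-element-block layout, the queue identifiers, and the helper that reports an online assist logical processor are illustrative assumptions rather than the patented implementation.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct work_element_block {
        bool assist_eligible;    /* "integrated assist" field of the WEB          */
        int  work_queue_id;      /* work queue assigned by the switch-to service  */
    } web_t;

    /* Stub: in a real system this would query the logical partition's
     * configuration for an online assist logical processor.                      */
    static bool assist_logical_processor_online(void) { return true; }

    /* switch-to: if an assist logical processor is online, mark the task's work
     * element block as eligible to run there, then assign it a work queue
     * (queue 1 for assist-eligible work, queue 0 otherwise in this sketch).      */
    static void switch_to_service(web_t *web)
    {
        if (assist_logical_processor_online())
            web->assist_eligible = true;
        web->work_queue_id = web->assist_eligible ? 1 : 0;
    }

    int main(void)
    {
        web_t web = { false, -1 };
        switch_to_service(&web);
        printf("eligible=%d queue=%d\n", web.assist_eligible, web.work_queue_id);
        return 0;
    }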
Abstract:
A computer-implemented method for distributing a plurality of tasks over a plurality of processing nodes in a processor network includes the following steps: calculating a task process consumption value for the tasks; calculating a measured node processor consumption value for the nodes; calculating a target node processor consumption value for the nodes, the target node processor consumption value indicating optimal node processor consumption; calculating a load index value as the difference between the measured node processor consumption value for a node i and the target node processor consumption value for node i; and distributing the tasks among the nodes to balance the processor workload among the nodes, according to the calculated load index value, such that the calculated load index value of each node is substantially zero. The method further employs a multi-dimensional balancing matrix, each dimension of the matrix representing a node corresponding to a different processor type and each cell representing tasks assigned to multiple nodes.
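
The load index computation lends itself to a short worked example. The C sketch below uses an assumed greedy placement policy and made-up consumption values; it illustrates only the relationship load_index[i] = measured[i] - target[i] and the goal of driving each index toward zero, not the multi-dimensional balancing matrix itself.

    #include <stdio.h>

    #define NODES 3
    #define TASKS 6

    int main(void)
    {
        double task_cost[TASKS] = { 4, 2, 7, 1, 5, 3 };   /* task consumption     */
        double measured[NODES]  = { 0, 0, 0 };            /* per-node consumption */
        double total = 0, target, load_index;

        for (int t = 0; t < TASKS; t++)
            total += task_cost[t];
        target = total / NODES;                 /* "optimal" consumption per node */

        for (int t = 0; t < TASKS; t++) {
            int best = 0;
            for (int n = 1; n < NODES; n++)     /* place on the least-loaded node */
                if (measured[n] < measured[best])
                    best = n;
            measured[best] += task_cost[t];
        }

        for (int n = 0; n < NODES; n++) {
            load_index = measured[n] - target;  /* ~0 when the workload balances  */
            printf("node %d: measured=%.1f target=%.1f load index=%+.1f\n",
                   n, measured[n], target, load_index);
        }
        return 0;
    }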
Abstract:
A method, system, and computer-usable medium for supporting debugging of host channel adapters in a logical partitioning environment. In a preferred embodiment of the present invention, a hypervisor acquires control of a trace facility and sets trace parameters for the host channel adapter. In response to determining that a trace event matching the trace parameters has been triggered, the hypervisor retrieves trace information from a buffer. In response to determining that the buffer does not include any more trace information, the hypervisor determines whether modification of the trace parameters is required. If modification of the trace parameters is required, the hypervisor alters the trace parameters in anticipation of another trace event.
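
The trace-handling sequence can be sketched as a small loop. In the C example below, the trace facility structure, the buffer layout, and the way parameters are altered are assumptions made for illustration only.

    #include <stdbool.h>
    #include <stdio.h>

    #define TRACE_BUF_ENTRIES 4

    typedef struct {
        int event_id;
        int trigger_mask;        /* trace parameters currently in effect  */
        int entries[TRACE_BUF_ENTRIES];
        int count;               /* entries currently held in the buffer  */
    } trace_facility_t;

    /* Drain the trace buffer after a matching event fired; once it is empty,
     * decide whether the trace parameters should be altered for the next event. */
    static void handle_trace_event(trace_facility_t *tf, bool need_new_params)
    {
        while (tf->count > 0)                   /* retrieve trace information    */
            printf("trace entry: %d\n", tf->entries[--tf->count]);

        if (need_new_params) {                  /* modification required?        */
            tf->trigger_mask <<= 1;             /* illustrative new parameters   */
            printf("trace parameters altered: mask=0x%x\n", tf->trigger_mask);
        }
    }

    int main(void)
    {
        trace_facility_t tf = { 7, 0x1, { 10, 11, 12 }, 3 };
        handle_trace_event(&tf, true);
        return 0;
    }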
Abstract:
A method for configuring a communication port of a communications interface of an information handling system into a plurality of virtual ports. A first command is issued to obtain information indicating a number of images of virtual ports supportable by the communications interface. A second command is then issued requesting the communications interface to virtualize the communication port. In response to the second command, one or more virtual switches are then configured to connect to the communication port, each virtual switch including a plurality of virtual ports, such that the one or more virtual switches are configured in a manner sufficient to support the number of images of virtual ports indicated by the obtained information. Thereafter, upon request via issuance of a third command, a logical link is established between one of the virtual ports of one of the virtual switches and a communicating element of the information handling system.
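
The three-command sequence can be illustrated with stand-in functions. In the C sketch below, the command names, the per-switch port count, and the integer identifiers are assumptions, not the actual command interface.

    #include <stdio.h>

    #define MAX_VPORTS_PER_SWITCH 4

    typedef struct {
        int vports;                              /* virtual ports on this switch */
    } vswitch_t;

    /* Command 1: query how many virtual port images the interface can support. */
    static int query_supported_vport_images(void) { return 6; }

    /* Command 2: virtualize the physical port by configuring enough virtual
     * switches to cover the reported number of virtual port images.            */
    static int virtualize_port(vswitch_t *sw, int images)
    {
        int nswitches = (images + MAX_VPORTS_PER_SWITCH - 1) / MAX_VPORTS_PER_SWITCH;
        for (int i = 0; i < nswitches; i++)
            sw[i].vports = MAX_VPORTS_PER_SWITCH;
        return nswitches;
    }

    /* Command 3: establish a logical link between one virtual port and a
     * communicating element (identified here by a plain integer).              */
    static void establish_link(int sw, int vport, int element)
    {
        printf("link: switch %d, vport %d <-> element %d\n", sw, vport, element);
    }

    int main(void)
    {
        vswitch_t switches[8];
        int images = query_supported_vport_images();      /* first command  */
        int n = virtualize_port(switches, images);        /* second command */
        printf("%d image(s) -> %d virtual switch(es)\n", images, n);
        establish_link(0, 1, 42);                         /* third command  */
        return 0;
    }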
Abstract:
Apparatus, method, and program product for use in passing initiative to a processor for handling an I/O request for an I/O operation that sends data between a main storage and one or more devices. A hierarchy of vectors registers I/O requests by the devices to send data to or receive data from the main storage. The hierarchy of vectors has one or more lower levels and a highest level. Each device sets a vector in the lowest level of the hierarchy to register an I/O request, and the setting of a vector in the lowest level is reflected up the hierarchy to the highest level, thereby registering I/O requests on any lower level of the hierarchy in the highest level. A dispatcher polls the hierarchy in high-to-low order and passes initiative to the processor to handle I/O requests registered in said hierarchy, responsive to the registering of an I/O request on the lowest level as reflected to the highest level of said hierarchy.
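
A two-level bit-vector hierarchy captures the registering and reflection behavior described here. The C sketch below assumes 32-bit vectors and a single summary level; the sizes and helper names are illustrative only.

    #include <stdint.h>
    #include <stdio.h>

    #define GROUPS 4                   /* lowest-level vectors (one per group)   */

    static uint32_t low[GROUPS];       /* lowest level: one bit per device       */
    static uint32_t high;              /* highest level: one bit per group       */

    /* A device registers an I/O request: set its bit and reflect it upward.     */
    static void register_request(int group, int device)
    {
        low[group] |= 1u << device;
        high       |= 1u << group;     /* reflected up to the highest level      */
    }

    /* Dispatcher: poll high-to-low; a set summary bit passes initiative to the
     * processor for every request registered beneath it.                        */
    static void dispatch(void)
    {
        for (int g = 0; g < GROUPS; g++) {
            if (!(high & (1u << g)))
                continue;              /* nothing registered under this group    */
            for (int d = 0; d < 32; d++)
                if (low[g] & (1u << d))
                    printf("pass initiative: group %d, device %d\n", g, d);
            low[g] = 0;
            high &= ~(1u << g);
        }
    }

    int main(void)
    {
        register_request(2, 5);
        register_request(0, 1);
        dispatch();
        return 0;
    }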
Abstract:
A queuing method and apparatus for the transfer of incoming and outgoing data in a network environment having a main storage is presented. A plurality of queue sets are provided in the main storage, with one or more sets dedicated to input and output. The queues can share access to a plurality of devices in the network across a plurality of communication stacks. Various network resources are mapped to the queues in order to facilitate resource allocation and dynamic configuration by providing initialization of a plurality of configuration parameters. In this way, the number of queues in each set can be dynamically expanded and contracted as dictated by traffic patterns and feedback indicators.
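
The expand/contract behavior can be sketched with a feedback-driven resize routine. The thresholds, queue-set structure, and depth metric in the C example below are assumptions chosen for illustration.

    #include <stdio.h>

    #define MAX_QUEUES 16

    typedef struct {
        int active_queues;        /* queues currently in the set          */
        int depth[MAX_QUEUES];    /* outstanding elements on each queue   */
    } queue_set_t;

    /* Feedback-driven resize: expand when the average depth is high, contract
     * when it is low, within the configured bounds.                            */
    static void resize_queue_set(queue_set_t *qs)
    {
        int total = 0;
        for (int i = 0; i < qs->active_queues; i++)
            total += qs->depth[i];
        int avg = total / qs->active_queues;

        if (avg > 8 && qs->active_queues < MAX_QUEUES)
            qs->active_queues++;                 /* expand under load           */
        else if (avg < 2 && qs->active_queues > 1)
            qs->active_queues--;                 /* contract when idle          */
    }

    int main(void)
    {
        queue_set_t input_set = { 2, { 12, 10 } };   /* busy input queue set    */
        resize_queue_set(&input_set);
        printf("input queue set now has %d queue(s)\n", input_set.active_queues);
        return 0;
    }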
Abstract:
In a computing network system environment having a gateway device that is electronically connected on one side to a plurality of initiating hosts and on another side to at least one local area network, which in turn connects a plurality of hosts to the gateway device, a method and apparatus are provided for eliminating any need to build a separate and special protocol data unit element for each header. The computer network environment uses a multi-path channel communication protocol as well as protocol data units to point to various portions of data. An interface layer is provided between a plurality of protocol stacks and the multi-path channel protocol. The interface layer has a timer that waits for data from the protocol stacks. A list of all buffers is assembled as they are received, comprising one entry for each data buffer. Upon expiration of the timer, this buffer list is transferred to the multi-path channel protocol layer, which in turn sends it to a channel-attached processor as one block. A deblocker interface is also provided on each channel-attached processor so that the length fields provided in the protocol headers can be used to determine the offset of the next data element in said block, thus eliminating the need for a special header at the next data element used by the communication protocol.
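
The deblocker's use of header length fields to locate the next data element can be shown directly. The C sketch below assumes a 2-byte big-endian length header per element, which is an illustrative format rather than the multi-path channel protocol's own layout.

    #include <stdint.h>
    #include <stdio.h>

    /* Each element in the block: 2-byte big-endian length followed by payload. */
    static void deblock(const uint8_t *block, size_t block_len)
    {
        size_t off = 0;
        while (off + 2 <= block_len) {
            uint16_t len = (uint16_t)((block[off] << 8) | block[off + 1]);
            printf("element at offset %zu, payload length %u\n", off, (unsigned)len);
            off += 2 + len;            /* length field gives the next offset     */
        }
    }

    int main(void)
    {
        /* One block carrying two elements of 3 and 2 payload bytes. */
        uint8_t block[] = { 0x00, 0x03, 'a', 'b', 'c',
                            0x00, 0x02, 'x', 'y' };
        deblock(block, sizeof block);
        return 0;
    }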
Abstract:
A fault occurs in a virtual environment that includes a base space, a first subspace, and a second subspace, each with a virtual address associated with content in auxiliary storage memory. The fault is resolved by copying the content from auxiliary storage to central storage memory and updating one or more base space dynamic address translation (DAT) tables, without updating the DAT tables of the first and second subspaces. A subsequent fault at the first subspace virtual address is resolved for the base space, and the base space DAT table information is copied to the first subspace DAT tables while the second subspace DAT tables are not updated.
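
The lazy propagation of DAT information can be reduced to a small sketch. The single-entry "tables" and function names in the C example below are assumptions; real DAT tables are multi-level hardware translation structures.

    #include <stdio.h>

    #define INVALID -1L

    static long base_dat = INVALID;   /* base space DAT entry (real frame)   */
    static long sub1_dat = INVALID;   /* first subspace DAT entry            */
    static long sub2_dat = INVALID;   /* second subspace DAT entry           */

    /* First fault: page the content into central storage and update only the
     * base space DAT table; neither subspace table is touched.                */
    static void resolve_base_fault(long frame)
    {
        base_dat = frame;
    }

    /* Later fault in the first subspace: copy the base space DAT information
     * into the first subspace table; the second subspace table stays invalid. */
    static void resolve_first_subspace_fault(void)
    {
        sub1_dat = base_dat;
    }

    int main(void)
    {
        resolve_base_fault(0x1000L);
        resolve_first_subspace_fault();
        printf("base=%ld sub1=%ld sub2=%ld\n", base_dat, sub1_dat, sub2_dat);
        return 0;
    }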
Abstract:
A program (e.g., an operating system) is provided a warning that it has a grace period in which to perform a function, such as cleanup (e.g., complete, stop, and/or move a dispatchable unit). The program is warned, in one example, that it is losing access to its shared resources. For instance, in a virtual environment, a guest program is warned that it is about to lose its central processing unit resources and, therefore, is to perform a function, such as cleanup.
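
The warning-and-grace-period idea can be sketched with a simple handler. In the C example below, signal delivery merely stands in for the warning mechanism and the "cleanup" is a print statement; all names are illustrative assumptions.

    #include <signal.h>
    #include <stdio.h>

    static volatile sig_atomic_t warned;

    /* Stand-in warning delivery: the handler only records that the grace
     * period has started.                                                  */
    static void warning_handler(int sig)
    {
        (void)sig;
        warned = 1;
    }

    int main(void)
    {
        signal(SIGINT, warning_handler);   /* SIGINT stands in for the warning */
        raise(SIGINT);                     /* simulate receipt of the warning  */

        if (warned) {
            /* Work done inside the grace period: complete, stop, or move
             * dispatchable units before CPU access is withdrawn.              */
            printf("warning received: cleaning up dispatchable units\n");
        }
        return 0;
    }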