Abstract:
A method and apparatus for selectively using input/output (I/O) buffers as a retransmit vehicle in a client/server system. The decision whether to use an I/O buffer as a retransmit vehicle is based on a number of factors, including the packet size, the expected round-trip time (RTT) for an acknowledgment of the transmission, the number of I/O buffers currently allocated, and the number of I/O buffers remaining. If the decision is made not to use the I/O buffer as a retransmit vehicle, the data is copied into a send buffer that the system maintains for the particular requester. Initially, three threshold values are set: the round-trip time (RTT) threshold, the critical threshold, and the tight buffer threshold. Connections whose round-trip time exceeds the RTT threshold, or connections made when the number of I/O buffers remaining is below the critical threshold, are not allowed to keep the I/O buffer as a retransmit vehicle. If the number of I/O buffers remaining falls below the critical threshold, a critical stabilization interval is started. During a critical stabilization interval, the I/O buffers may not be used as a retransmit vehicle if the number of I/O buffers already allocated exceeds the tight buffer threshold, even if the number of I/O buffers remaining is above the critical threshold. For each I/O buffer, a use count is maintained of the number of packets in the buffer awaiting acknowledgment. The use count is decremented each time an acknowledgment is received for one of the packets in the I/O buffer. When the use count reaches zero, the I/O buffer is freed.
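The threshold checks and the use-count bookkeeping can be pictured in a minimal C sketch. The pool accounting, field names, and function names (keep_as_retransmit_vehicle, on_ack) are illustrative assumptions, not the patented implementation:

```c
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical buffer-pool state; names and bookkeeping are illustrative only. */
struct iobuf_pool {
    int  buffers_remaining;      /* free I/O buffers in the pool             */
    int  buffers_allocated;      /* I/O buffers currently held by senders    */
    int  rtt_threshold_ms;       /* round-trip time (RTT) threshold          */
    int  critical_threshold;     /* free-buffer floor that starts the
                                    critical stabilization interval          */
    int  tight_buffer_threshold; /* allocation ceiling during stabilization  */
    bool in_critical_interval;   /* critical stabilization interval active   */
};

struct iobuf {
    int use_count;               /* packets in this buffer awaiting an ACK   */
};

/* Decide whether a packet may keep its I/O buffer as the retransmit vehicle,
 * or must instead be copied into the requester's send buffer. */
bool keep_as_retransmit_vehicle(struct iobuf_pool *p, int expected_rtt_ms)
{
    if (expected_rtt_ms > p->rtt_threshold_ms)
        return false;                         /* RTT too long: copy instead  */

    if (p->buffers_remaining < p->critical_threshold) {
        p->in_critical_interval = true;       /* start stabilization         */
        return false;
    }

    if (p->in_critical_interval &&
        p->buffers_allocated > p->tight_buffer_threshold)
        return false;                         /* pool still under pressure   */

    return true;
}

/* Called when an acknowledgment arrives for one packet held in buf. */
void on_ack(struct iobuf_pool *p, struct iobuf *buf)
{
    if (--buf->use_count == 0) {              /* last outstanding packet     */
        free(buf);                            /* release the I/O buffer      */
        p->buffers_allocated--;
        p->buffers_remaining++;
    }
}
```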
Abstract:
A method and apparatus for handling outgoing communication requests in an information handling system in which outgoing communication packets are accumulated into a block that is written to an input/output (I/O) device. For each I/O device there is generated a blocking factor, representing a predetermined number of packets that are accumulated before the block is written to the I/O device, as well as a push interval, representing the maximum period of time for which any packet in the block may be stalled. Upon the arrival of a new outgoing packet, the packet is added to the block, and the block is written to the I/O device if either the block now contains the predetermined number of packets or any packet in the block has been waiting for more than the push interval. A timer running asynchronously with the arrival of outgoing requests periodically pops to write the block to the I/O device if it has been waiting too long, even if no new requests have arrived. Both the blocking factor and the push interval are periodically adjusted in accordance with the actual throughput, so that the blocking factor corresponds to the level of consistent parallelism for a given workload.
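As a rough illustration of the blocking-factor and push-interval mechanism, here is a small C sketch; the struct layout, second-granularity timing, and function names are assumptions rather than the patented design:

```c
#include <stdbool.h>
#include <time.h>

/* Illustrative per-device blocking state; field names are assumptions. */
struct io_block {
    int    packets;            /* packets accumulated in the current block   */
    int    blocking_factor;    /* packets to collect before writing          */
    double push_interval_s;    /* longest any packet may be stalled          */
    time_t oldest_arrival;     /* arrival time of the oldest queued packet   */
};

/* blocking_factor and push_interval_s would be re-tuned periodically from
 * the observed throughput (adjustment logic not shown). */

static void write_block_to_device(struct io_block *b)
{
    /* ... issue the block write to the I/O device ... */
    b->packets = 0;                               /* start a new block       */
}

/* Called for each newly arriving outgoing packet. */
void add_packet(struct io_block *b)
{
    if (b->packets == 0)
        b->oldest_arrival = time(NULL);
    b->packets++;

    bool full    = b->packets >= b->blocking_factor;
    bool stalled = difftime(time(NULL), b->oldest_arrival) > b->push_interval_s;
    if (full || stalled)
        write_block_to_device(b);
}

/* Called by the asynchronous timer, even when no new requests arrive. */
void timer_pop(struct io_block *b)
{
    if (b->packets > 0 &&
        difftime(time(NULL), b->oldest_arrival) > b->push_interval_s)
        write_block_to_device(b);
}
```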
Abstract:
A transport layer connection is established between a first system and a second system. The establishment of the transport layer connection includes identifying a remote direct memory access (RDMA) connection between the first system and the second system. After establishing the transport layer connection, the first and second systems exchange data using the RDMA connection identified in establishing the transport layer connection.
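A very small C sketch of the idea, assuming a hypothetical connection descriptor that carries both the transport socket and the RDMA path identified during the handshake; the actual data-transfer verbs depend on the RDMA stack and are left as comments:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical connection descriptor; fields are illustrative only. */
struct connection {
    int      tcp_sock;         /* transport layer connection                 */
    uint32_t rdma_qp;          /* RDMA connection identified during setup    */
    bool     rdma_usable;
};

/* During transport-layer establishment the peers also identify an RDMA
 * connection between the two systems (here, an advertised queue pair). */
void establish(struct connection *c, int sock, uint32_t advertised_qp)
{
    c->tcp_sock    = sock;     /* ... normal transport handshake ...         */
    c->rdma_qp     = advertised_qp;
    c->rdma_usable = true;
}

/* After establishment, data moves over the identified RDMA connection
 * rather than over the transport connection itself. */
void send_data(struct connection *c, const void *buf, size_t len)
{
    (void)buf; (void)len;
    if (c->rdma_usable) {
        /* rdma_write(c->rdma_qp, buf, len);   -- RDMA data exchange         */
    } else {
        /* send(c->tcp_sock, buf, len, 0);     -- transport-layer fallback   */
    }
}
```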
Abstract:
A computer implemented program product and data processing system for receiving data at a targeted logical partition. A computer locates a buffer element in reliance on a connection status bit array. The computer copies control information to the targeted logical partition's local storage. The computer updates the targeted logical partition's local producer cursor based on the control information. The computer copies data to an application receive buffer. The computer determines that an application has completed a receive operation. Responsive to a determination that the application completed the receive operation, the computer updates the targeted logical partition's local consumer cursor to match the targeted logical partition's producer cursor.
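The receive path can be pictured with a short C sketch; the bit-array scan, the cursor fields, and the helper names are illustrative assumptions rather than the patented data structures:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define NUM_CONNECTIONS 64

/* Illustrative receive-side state for the targeted logical partition. */
struct lpar_rx {
    uint64_t conn_status_bits;           /* connection status bit array      */
    uint64_t producer_cursor;            /* local producer cursor            */
    uint64_t consumer_cursor;            /* local consumer cursor            */
    struct { uint64_t next_elem; } ctl;  /* copied control information       */
};

/* Locate a buffer element in reliance on the connection status bit array. */
int locate_buffer_element(const struct lpar_rx *rx)
{
    for (int c = 0; c < NUM_CONNECTIONS; c++)
        if (rx->conn_status_bits & (1ULL << c))
            return c;
    return -1;                           /* nothing pending                  */
}

/* Deliver one element into the application's receive buffer. */
void receive(struct lpar_rx *rx, void *app_buf, const void *data, size_t len)
{
    /* control information has been copied into local storage (rx->ctl);
       advance the local producer cursor from it */
    rx->producer_cursor = rx->ctl.next_elem;

    memcpy(app_buf, data, len);          /* copy data to the receive buffer  */
}

/* Called once the application reports the receive operation complete. */
void receive_complete(struct lpar_rx *rx)
{
    rx->consumer_cursor = rx->producer_cursor;   /* catch the consumer up    */
}
```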
Abstract:
Address spaces are resized concurrently with accesses to those address spaces. The size of an address space can be increased or decreased while read or write operations are being performed on the address space. Further, cache entries associated with an address space being decreased in size are purged.
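One way to picture this is a minimal C sketch assuming a single atomic size field and a hypothetical cache-purge helper; the names are illustrative only:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Minimal sketch: an address space whose size can change while readers
 * and writers are active; all names are illustrative. */
struct addr_space {
    _Atomic uint64_t size;          /* current size, readable concurrently   */
};

/* Hypothetical helper: drop cache entries for addresses at or above bound. */
static void purge_cache_above(uint64_t bound)
{
    (void)bound;                    /* cache purge not shown                 */
}

/* Grow the space; concurrent accesses simply observe the new size. */
void grow(struct addr_space *as, uint64_t new_size)
{
    atomic_store(&as->size, new_size);
}

/* Shrink the space; cache entries for the removed range are purged. */
void shrink(struct addr_space *as, uint64_t new_size)
{
    atomic_store(&as->size, new_size);
    purge_cache_above(new_size);
}

/* A concurrent read or write checks the size it observes at that instant. */
int access_ok(struct addr_space *as, uint64_t addr)
{
    return addr < atomic_load(&as->size);
}
```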
Abstract:
A fault occurs in a virtual environment that includes a base space, a first subspace, and a second subspace, each with a virtual address associated with content in auxiliary storage. The fault is resolved by copying the content from auxiliary storage to central storage and updating one or more base space dynamic address translation (DAT) tables, without updating the DAT tables of the first and second subspaces. A subsequent fault associated with the virtual address of the first subspace is resolved for the base space: the base space DAT table information is copied to the first subspace DAT tables, and the second subspace DAT tables are again not updated.
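The fault-handling order can be illustrated with a simplified single-level C sketch; the DAT layout, the paging helper, and the function names are assumptions, not the patented structures:

```c
#include <stdbool.h>
#include <stdint.h>

#define DAT_ENTRIES 256

/* Simplified one-level DAT table; real DAT structures are multi-level. */
struct dat_table {
    uint64_t frame[DAT_ENTRIES];
    bool     valid[DAT_ENTRIES];
};

struct env {
    struct dat_table base, sub1, sub2;   /* base space and two subspaces     */
};

/* Hypothetical paging helper: copy the page from auxiliary storage into a
 * central-storage frame and return the frame address (placeholder value). */
static uint64_t page_in_from_aux(unsigned page)
{
    return (uint64_t)page << 12;
}

/* First fault: resolve into the base space DAT tables only. */
void fault_base(struct env *e, unsigned page)
{
    e->base.frame[page] = page_in_from_aux(page);
    e->base.valid[page] = true;
    /* the first and second subspace DAT tables are deliberately untouched */
}

/* Subsequent fault on the same address in the first subspace: copy the
 * base space DAT information instead of paging in again. */
void fault_sub1(struct env *e, unsigned page)
{
    if (!e->base.valid[page])
        fault_base(e, page);             /* resolve for the base space       */
    e->sub1.frame[page] = e->base.frame[page];
    e->sub1.valid[page] = true;
    /* the second subspace DAT tables are still not updated */
}
```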
Abstract:
A computer implemented method for sharing physical memory among logical partitions. A computer reserves physical memory of a Central Electronic Complex (CEC) as a shared memory pool for communication within the CEC. The computer creates a first logical partition using resources of the CEC that are not reserved as the shared memory pool. The computer creates a second logical partition using resources of the CEC that are not reserved as the shared memory pool. The computer creates a virtual local area network (VLAN) having at least two addresses within the CEC. The computer allocates a portion of the shared memory pool to the VLAN.
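A toy C sketch of the resource split, under the assumption that memory can be modeled as simple byte counters; the struct names and helpers are illustrative only:

```c
#include <stdint.h>

/* Illustrative resource model of a CEC; all names are assumptions. */
struct cec {
    uint64_t total_mem;        /* physical memory of the CEC                 */
    uint64_t shared_pool;      /* reserved for communication within the CEC  */
    uint64_t unreserved;       /* remains available to logical partitions    */
};

struct lpar { uint64_t mem; };
struct vlan { int addrs; uint64_t pool_share; };

void reserve_shared_pool(struct cec *c, uint64_t pool_size)
{
    c->shared_pool = pool_size;
    c->unreserved  = c->total_mem - pool_size;
}

/* Partitions draw only from resources outside the shared memory pool. */
struct lpar create_lpar(struct cec *c, uint64_t mem)
{
    c->unreserved -= mem;
    return (struct lpar){ .mem = mem };
}

/* The VLAN has at least two addresses within the CEC and is allocated a
 * portion of the shared memory pool. */
struct vlan create_vlan(struct cec *c, int addrs, uint64_t pool_share)
{
    c->shared_pool -= pool_share;
    return (struct vlan){ .addrs = addrs, .pool_share = pool_share };
}
```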
Abstract:
A system, method and computer program product for providing a shared memory translation facility. The method includes receiving, at a shared memory translation mechanism, a request from a requestor at a configuration for access to a memory address. It is determined whether the memory address refers to a shared memory object (SMO) accessible by a plurality of configurations. In response to determining that the memory address refers to the SMO, it is determined whether the configuration has access to the SMO. In response to determining that the configuration has access to the SMO, the requestor is provided a system absolute address for the SMO and access to the SMO. In this manner, direct interchange of data among the plurality of configurations is allowed.
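The translation check reduces to two tests, sketched below in C; the SMO descriptor, the access bitmap, and the name smt_translate are hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative shared memory object descriptor; field names are assumptions. */
struct smo {
    uint64_t base, size;            /* address range the SMO covers          */
    uint64_t sys_abs_base;          /* system absolute address of the SMO    */
    uint64_t allowed_configs;       /* one bit per configuration with access */
};

/* Returns true and fills *sys_abs when the address refers to an SMO that
 * this configuration may access; false otherwise. */
bool smt_translate(const struct smo *o, int config_id,
                   uint64_t addr, uint64_t *sys_abs)
{
    bool in_smo = addr >= o->base && addr < o->base + o->size;
    if (!in_smo)
        return false;                              /* not a shared object    */

    if (!(o->allowed_configs & (1ULL << config_id)))
        return false;                              /* no access to the SMO   */

    *sys_abs = o->sys_abs_base + (addr - o->base); /* system absolute address */
    return true;
}
```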
Abstract:
A system, method and computer program product for providing quiesce filtering for shared memory. The method includes receiving a shared-memory quiesce request at a processor. The request includes a donor zone. The processor includes a first-level translation lookaside buffer (TLB1). It is determined that the shared-memory quiesce request can be filtered by the processor if there are no shared-memory entries in the TLB1, the donor zone is not equal to the current zone of the processor, and the processor is not running in host mode. The shared-memory quiesce request is filtered in response to the determining.
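The filtering condition is a three-way conjunction, which a short C sketch makes explicit; the state fields and the function name are assumptions:

```c
#include <stdbool.h>

/* Illustrative per-processor state; names are assumptions. */
struct cpu_state {
    bool shared_entries_in_tlb1;   /* any shared-memory entries in the TLB1  */
    int  current_zone;             /* zone the processor is currently running */
    bool host_mode;                /* true when running in host mode         */
};

/* A shared-memory quiesce request can be filtered (handled locally without
 * a full quiesce) only when all three conditions from the abstract hold. */
bool can_filter_quiesce(const struct cpu_state *cpu, int donor_zone)
{
    return !cpu->shared_entries_in_tlb1 &&
           donor_zone != cpu->current_zone &&
           !cpu->host_mode;
}
```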
Abstract:
A method and apparatus for operating a central processing unit (CPU) including a plurality of processors is provided and includes collecting real-time statistics relating to the processors during dispatching activities, identifying give-help processors from the real-time statistics when the real-time statistics indicate that one or more of the nodes is overworked, and implementing help to be provided by the give-help processors to relieve the overworked node of a portion of the work to be distributed thereto.
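A skeletal C sketch of the give-help flow, with hypothetical per-node statistics, an arbitrary overwork threshold, and a naive 50/50 work split chosen purely for illustration:

```c
#define NODES 4
#define OVERWORK_THRESHOLD 100     /* hypothetical cutoff                    */

/* Illustrative real-time dispatch statistics; names are assumptions. */
struct node_stats {
    int queued_work;               /* work units awaiting dispatch           */
    int idle_processors;           /* processors with spare capacity         */
};

/* Identify an overworked node from the collected statistics. */
int find_overworked_node(const struct node_stats s[NODES])
{
    for (int n = 0; n < NODES; n++)
        if (s[n].queued_work > OVERWORK_THRESHOLD)
            return n;
    return -1;
}

/* Identify a give-help node: one with idle processors able to assist. */
int find_give_help_node(const struct node_stats s[NODES], int overworked)
{
    for (int n = 0; n < NODES; n++)
        if (n != overworked && s[n].idle_processors > 0)
            return n;
    return -1;                     /* nobody can help right now              */
}

/* Move a portion of the overworked node's work to the helper. */
void give_help(struct node_stats s[NODES], int overworked, int helper)
{
    int share = s[overworked].queued_work / 2;   /* illustrative split       */
    s[overworked].queued_work -= share;
    s[helper].queued_work     += share;
}
```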