Abstract:
Mechanisms are provided for reducing the idle time of a computing device due to delays in transmitting/receiving acknowledgement packets. A first data amount corresponding to a window size for a communication connection is determined. A second data amount, in excess of the first data amount, that may be transmitted along with the first data amount is calculated. The first and second data amounts are then transmitted from the sender to the receiver. The first data amount is delivered into a receive buffer of the receiver. The second data amount is held in a switch port buffer of a switch port without being provided to the receive buffer. The second data amount is transmitted from the switch port buffer to the receive buffer in response to the switch port detecting an acknowledgement packet from the receiver.
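A minimal sketch in C of the split described above, assuming a 64 KB advertised receive window and a 32 KB switch port buffer (both figures are hypothetical; the abstract gives no concrete sizes). The window-sized first amount goes straight to the receive buffer, while the excess second amount is parked at the switch until an ACK is observed:

#include <stdio.h>

/* Hypothetical sizes; the abstract gives no concrete figures. */
#define RECV_WINDOW 65535L   /* first data amount: receiver's advertised window */
#define SWITCH_BUF  32768L   /* capacity of the switch port buffer */

/* Split a pending send into the window-sized first amount and the
 * excess second amount that the switch port will hold back. */
static void plan_transmission(long pending, long *first, long *second)
{
    long excess;
    *first = pending < RECV_WINDOW ? pending : RECV_WINDOW;
    excess = pending - *first;
    *second = excess < SWITCH_BUF ? excess : SWITCH_BUF;
}

int main(void)
{
    long first, second;
    plan_transmission(90000L, &first, &second);
    printf("sent to receive buffer now:   %ld bytes\n", first);
    printf("parked in switch port buffer: %ld bytes\n", second);
    /* When the switch port detects an ACK from the receiver, it
     * forwards the parked bytes into the receive buffer. */
    printf("forwarded on ACK detection:   %ld bytes\n", second);
    return 0;
}

The idea, as the abstract frames it, is that the switch port sees the receiver's ACK before the sender would, so the parked second amount starts flowing without waiting out the full sender round trip.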
Abstract:
The method determines whether a particular jumbo data packet benefits from fragmentation and reassembly management during communication through a network or networks. The method determines the best communication path, including path partners, between a sending information handling system (IHS) and a receiving IHS for the jumbo packet. A packet manager determines the maximum transmission unit (MTU) size for each path partner or switch in the communication path, including the sending and receiving IHSs. The method provides transfer of the jumbo packets intact between those path partner switches of the communication path exhibiting MTUs sized for jumbo or larger packet transfer. The method provides fragmentation of jumbo packets into multiple normal packets for transfer between switches exhibiting normal packet MTU sizes. The packet manager reassembles the multiple normal packets back into jumbo packets for those network devices, including the receiving IHS, capable of managing jumbo packets.
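As a rough illustration, the per-hop decision might look like the following C sketch; the 9000-byte jumbo size and the hop MTU list are assumed values, not taken from the abstract:

#include <stdio.h>

#define JUMBO 9000   /* assumed jumbo-frame payload size */

/* MTUs discovered for the switches along the chosen path (made up). */
static const int path_mtu[] = { 9000, 9000, 1500, 1500, 9000 };

int main(void)
{
    int hops = (int)(sizeof path_mtu / sizeof path_mtu[0]);
    for (int h = 0; h < hops; h++) {
        if (path_mtu[h] >= JUMBO) {
            printf("hop %d (MTU %d): forward jumbo packet intact\n",
                   h, path_mtu[h]);
        } else {
            /* fragment into ceil(JUMBO / MTU) normal packets */
            int frags = (JUMBO + path_mtu[h] - 1) / path_mtu[h];
            printf("hop %d (MTU %d): fragment into %d normal packets\n",
                   h, path_mtu[h], frags);
        }
    }
    /* A jumbo-capable device downstream, ultimately the receiving
     * IHS, reassembles the fragments into the original jumbo packet. */
    return 0;
}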
Abstract:
Expediting adapter failover may minimize network downtime and preserve network performance. Embodiments may comprise copying a primary adapter memory of a failing primary adapter to a standby adapter memory of a standby adapter. Copying the memory may expedite TCP/IP offload adapter failover by maintaining TCP/IP stack and connection information. In several embodiments, Copy Logic may copy primary adapter memory to standby adapter memory. In some embodiments, Detect Logic may monitor primary adapter viability and may initiate failover. In additional embodiments, Assess Logic may assess whether the IO bus is operative, permitting Direct Logic to copy adapter memory via, e.g., DMA. In other embodiments, Packet Logic may fragment primary adapter memory into network packets sent through the network to the standby adapter, where Unpack Logic may unpack them into memory.
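A compressed sketch of the failover path in C, with the abstract's Detect/Assess/Direct/Packet/Unpack Logic blocks reduced to stub functions and memcpy standing in for both DMA and packet transport (sizes and names are illustrative):

#include <string.h>
#include <stdio.h>
#include <stdbool.h>

#define ADAPTER_MEM 4096  /* illustrative adapter memory size */

/* Stand-ins for the adapters' onboard memories. */
static unsigned char primary_mem[ADAPTER_MEM];
static unsigned char standby_mem[ADAPTER_MEM];

/* Assess Logic: is the IO bus still usable? (stubbed) */
static bool io_bus_operative(void) { return true; }

/* Direct Logic: copy over the bus (DMA in real hardware; memcpy here). */
static void direct_copy(void) { memcpy(standby_mem, primary_mem, ADAPTER_MEM); }

/* Packet Logic: fragment memory into network-sized chunks and "send". */
static void packet_copy(void)
{
    const int chunk = 1024;
    for (int off = 0; off < ADAPTER_MEM; off += chunk) {
        /* In the real design each chunk travels as a network packet and
         * Unpack Logic writes it into standby memory on arrival. */
        memcpy(standby_mem + off, primary_mem + off, chunk);
    }
}

/* Detect Logic has decided the primary adapter is failing: fail over. */
int main(void)
{
    if (io_bus_operative())
        direct_copy();   /* preserves TCP/IP stack and connection state */
    else
        packet_copy();
    puts("standby adapter now holds the offloaded connection state");
    return 0;
}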
Abstract:
A method and system for substantially avoiding loss of data and enabling a continuing connection to the application during an MTU size changing operation in an active network computing device. Logic is added to the device driver, which logic provides several enhancements to the MTU size changing operation/process. Among these enhancements are: (1) logic for temporarily pausing the data coming in from the link partner while changing the MTU size; (2) logic for returning a “device busy” status to higher-layer protocol transmit requests during the MTU size changing process, which prevents the application from issuing new requests until the busy signal is removed; and (3) logic for enabling resumption of both flows when the MTU size change is completed. With this new logic, the device driver/adapter does not have any transmit or receive packets to process for a short period of time while the MTU size change is ongoing.
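A minimal sketch of the three enhancements in C, with the driver state reduced to two variables and the inbound pause stubbed out (all names are hypothetical):

#include <stdio.h>
#include <stdbool.h>

/* Illustrative driver state; names are invented for this sketch. */
static bool mtu_change_in_progress = false;
static int  current_mtu = 1500;

enum tx_status { TX_OK, TX_DEVICE_BUSY };

/* (2) return "device busy" to higher-layer transmit requests while the
 * change is under way, so no new requests are issued until it clears */
static enum tx_status driver_transmit(const void *pkt, int len)
{
    (void)pkt; (void)len;
    return mtu_change_in_progress ? TX_DEVICE_BUSY : TX_OK;
}

static void change_mtu(int new_mtu)
{
    mtu_change_in_progress = true;
    /* (1) pause inbound data from the link partner here, e.g. with a
     * flow-control pause frame (stubbed in this sketch) */
    printf("tx while changing: %s\n",
           driver_transmit(0, 0) == TX_DEVICE_BUSY ? "busy" : "ok");
    current_mtu = new_mtu;          /* reconfigure the adapter */
    mtu_change_in_progress = false; /* (3) resume both flows */
}

int main(void)
{
    change_mtu(9000);
    printf("tx after change: %s (MTU %d)\n",
           driver_transmit(0, 0) == TX_OK ? "ok" : "busy", current_mtu);
    return 0;
}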
Abstract:
A method, system and computer program product for eliminating the latency in searching for contiguous memory space by an IO DMA request of a device driver. Three new application programming interfaces (APIs) are provided within the operating system (OS) code that allow the device driver(s) to (1) pre-request and pre-allocate the IO DMA address range from the OS during the IPL and maintain control of the address, (2) map a system (virtual/physical) address range to a specific pre-allocated IO DMA address range, and (3) free the pre-allocated IO DMA address space back to the kernel when the space is no longer required. Utilizing these APIs enables advanced IO DMA address mapping techniques maintained by the device drivers: the assigned/allocated IO DMA address space is no longer fragmented, and the latency of completing the IO DMA mapping is substantially reduced or eliminated.
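The three APIs might take shapes along these lines; this C sketch stubs them with a trivial bump allocator so it runs, and the names (dma_prealloc, dma_map_fixed, dma_prefree) are invented for illustration, not real OS interfaces:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

typedef uint64_t dma_addr_t;

static dma_addr_t next_free = 0x100000;  /* pretend IO DMA space */

/* (1) pre-request/pre-allocate an IO DMA address range during IPL;
 * the driver keeps control of the returned range */
static dma_addr_t dma_prealloc(size_t size)
{
    dma_addr_t a = next_free;
    next_free += size;
    return a;
}

/* (2) map a system (virtual/physical) address range onto a specific
 * pre-allocated IO DMA address range */
static int dma_map_fixed(void *sys_addr, size_t size, dma_addr_t io_addr)
{
    printf("map sys %p (+%zu) -> IO DMA 0x%llx\n",
           (void *)sys_addr, size, (unsigned long long)io_addr);
    return 0;
}

/* (3) free the pre-allocated IO DMA address space back to the kernel */
static void dma_prefree(dma_addr_t io_addr, size_t size)
{
    printf("free IO DMA 0x%llx (+%zu)\n", (unsigned long long)io_addr, size);
}

int main(void)
{
    char buf[4096];
    /* Reserve once at initialization: no per-IO search for contiguous
     * space, so the mapping latency leaves the hot path entirely. */
    dma_addr_t range = dma_prealloc(sizeof buf);
    dma_map_fixed(buf, sizeof buf, range);   /* per-IO fast path */
    dma_prefree(range, sizeof buf);          /* at driver teardown */
    return 0;
}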
Abstract:
A method, system and computer-usable medium are disclosed for managing transient instruction streams. Transient flags are defined in Branch-and-Link (BRL) instructions that are known to be infrequently executed. A bit is likewise set in a Special Purpose Register (SPR) of the hardware (e.g., a core) that is executing an instruction request thread. Subsequent fetches or prefetches in the request thread are treated as transient and are not written to lower-level caches. If an instruction is non-transient, and if a lower-level cache is inclusive of the L1 instruction cache, a fetch or prefetch miss that is obtained from memory may be written in both the L1 and the lower-level cache. If it is not inclusive, a cast-out from the L1 instruction cache may be written in the lower-level cache.
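A sketch of the resulting write policy in C, with the SPR transient bit and the inclusivity property modeled as plain booleans (the cache structures themselves are elided, and the fill/cast-out split follows the reading above):

#include <stdio.h>
#include <stdbool.h>

static bool spr_transient_bit = false;  /* set when a flagged BRL runs */
static bool l2_inclusive_of_l1 = true;  /* cache hierarchy property */

static void fill_on_miss(unsigned long addr)
{
    printf("fill L1 with line 0x%lx\n", addr);  /* always fill L1 */
    if (spr_transient_bit)
        return;           /* transient: never written to lower levels */
    if (l2_inclusive_of_l1)
        printf("also fill L2 with line 0x%lx\n", addr);
    /* else: the line reaches L2 only later, as an L1 cast-out */
}

static void cast_out(unsigned long addr)
{
    if (!spr_transient_bit && !l2_inclusive_of_l1)
        printf("write cast-out line 0x%lx to L2\n", addr);
}

int main(void)
{
    fill_on_miss(0x1000);         /* non-transient, inclusive L2 */
    spr_transient_bit = true;     /* a transient-flagged BRL executed */
    fill_on_miss(0x2000);         /* transient: L1 only */
    spr_transient_bit = false;
    l2_inclusive_of_l1 = false;
    fill_on_miss(0x3000);         /* non-inclusive: L1 only on fill */
    cast_out(0x3000);             /* reaches L2 on eviction */
    return 0;
}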
Abstract:
Techniques for preserving memory affinity in a computer system are disclosed. In response to a request for memory access to a page within a memory affinity domain, a determination is made whether the request is initiated by a processor associated with that memory affinity domain. If the request is not initiated by such a processor, a determination is made whether there is a page ID match with an entry within a page migration tracking module associated with the memory affinity domain. If there is no page ID match, an entry within the page migration tracking module is selected to be updated with a new page ID and a new memory affinity ID. If there is a page ID match, a further determination is made whether there is a memory affinity ID match with the matching entry. If there is no memory affinity ID match, the entry is updated with a new memory affinity ID; and if there is a memory affinity ID match, an access counter of the entry is incremented.
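A C sketch of that update flow for one remote access; the four-entry table, the always-evict-slot-0 replacement, and the counter reset on an affinity change are simplifications assumed for illustration:

#include <stdio.h>
#include <stdbool.h>

#define TRACK_ENTRIES 4

/* One entry of the page migration tracking module (illustrative). */
struct track_entry {
    unsigned long page_id;
    int  affinity_id;    /* affinity domain of recent remote accessors */
    int  access_count;
    bool valid;
};

static struct track_entry table[TRACK_ENTRIES];

/* Decision flow from the abstract, applied to one remote access. */
static void track_remote_access(unsigned long page_id, int req_affinity)
{
    for (int i = 0; i < TRACK_ENTRIES; i++) {
        if (table[i].valid && table[i].page_id == page_id) {
            if (table[i].affinity_id == req_affinity) {
                table[i].access_count++;          /* affinity ID match */
            } else {
                table[i].affinity_id = req_affinity;  /* new domain */
                table[i].access_count = 1;
            }
            return;
        }
    }
    /* no page ID match: select an entry to update (slot 0 here; real
     * hardware would apply a replacement policy) */
    table[0] = (struct track_entry){ page_id, req_affinity, 1, true };
}

int main(void)
{
    track_remote_access(0xABCD, 2);  /* miss: installs a new entry   */
    track_remote_access(0xABCD, 2);  /* hit + affinity match: count++ */
    track_remote_access(0xABCD, 3);  /* hit, new affinity: update ID  */
    printf("page 0xABCD -> affinity %d, count %d\n",
           table[0].affinity_id, table[0].access_count);
    return 0;
}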