Abstract:
A dynamically enforceable, application-controlled, quasi-reliable extension to TCP permits a client application to dynamically set a percent loss tolerance for data transmission reliability through network input/output system calls to the TCP layer, thereby programming the transport layer to optimistically acknowledge non-critical missing frames. The reliability requirement can be dynamically set within TCP to the level of reliability required for specific data frames within the data stream during the data transfer. Based on the specified loss tolerance, the TCP layer determines whether to trigger a retransmission or continue delivering out-of-order frames to the application. A forced acknowledgement frame is sent for each missing packet until the number of missing packets causing forced acknowledgements within the current receive buffer frame exceeds the loss tolerance. This process avoids needless retransmissions and permits the TCP data flow and sliding window to advance uninterrupted, thereby providing substantial performance benefits to network throughput.
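The retransmission decision described above can be illustrated with a small sketch. The ReceiveWindow class, its field names, and the percentage rule are illustrative assumptions, not the patented implementation:

```python
# A minimal sketch of the receiver-side decision, assuming the application has
# already set loss_tolerance_pct through a socket call; all names are illustrative.

class ReceiveWindow:
    def __init__(self, loss_tolerance_pct, window_frames):
        self.loss_tolerance_pct = loss_tolerance_pct  # percent loss the application tolerates
        self.window_frames = window_frames            # frames covered by the current receive buffer
        self.forced_acks = 0                          # missing frames optimistically acknowledged so far

    def on_missing_frame(self, seq):
        """Decide whether to optimistically acknowledge a gap or fall back to retransmission."""
        allowed = self.window_frames * self.loss_tolerance_pct / 100.0
        if self.forced_acks + 1 <= allowed:
            self.forced_acks += 1
            return "forced_ack"        # advance the sliding window past the missing frame
        return "request_retransmit"    # tolerance exceeded: recover the frame normally

win = ReceiveWindow(loss_tolerance_pct=10, window_frames=50)   # tolerate up to 5 gaps
print(win.on_missing_frame(seq=1001))                          # "forced_ack" while under tolerance
```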
Abstract:
The server-side Transmission Control Protocol (TCP) is modified so that the server returns a SYNACK message with the window size equal to zero if the server is busy. When a client sends a TCP connection request and receives a synchronization acknowledgement (SYNACK) message with the window size equal to zero, the client knows that the server received the connection request and that the server is busy. The client may then send an acknowledgement message to complete the three-way synchronization handshake, thus successfully completing the connection. Thereafter, the client-side TCP may probe the server-side TCP until a window update message is received from the server. When the server sends a window update message to set the window size to a non-zero size, the client knows that the server is no longer busy and the client application may then use the TCP connection.
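The handshake and probing behaviour can be sketched as a small client-side state machine. The message dictionaries and the send/recv callables below are illustrative stand-ins; in practice this logic lives inside the TCP layer, not in the application:

```python
# A minimal sketch, assuming scripted send/recv transports; message names are illustrative.

def client_connect(send, recv):
    send({"type": "SYN"})
    synack = recv()                          # e.g. {"type": "SYNACK", "window": 0}
    send({"type": "ACK"})                    # three-way handshake completes even if the server is busy
    window = synack["window"]
    while window == 0:                       # zero window advertised: the server is busy
        send({"type": "WINDOW_PROBE"})       # keep probing until a window update arrives
        update = recv()
        if update["type"] == "WINDOW_UPDATE":
            window = update["window"]
    return window                            # non-zero window: the application may use the connection

# Scripted exchange: a busy server, followed by a window update.
replies = iter([{"type": "SYNACK", "window": 0},
                {"type": "WINDOW_UPDATE", "window": 65535}])
print(client_connect(send=lambda msg: None, recv=lambda: next(replies)))
```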
Abstract:
Responsive to detecting a need for a mobile device to transfer out of a first network, requests are sent from the mobile device to a communication endpoint in mSCTP. The first request is to stop transmissions to a first address of the mobile device. The second request is to add an intermediary address of a mobility support service designated for receiving any communications already in transmission when the first request is sent. The communication link for the mobile device is then transitioned from the first address at the first network to a second address at a second network. The first network and the second network are non-intersecting networks. The mobile device then indicates to the mobility support service that the handover from the first network to the second network is complete. The mobility support service responds to the completion by sending a third request in mSCTP to the communication endpoint to continue communication with the mobile device at the second address.
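The ordering of the three requests and the role of the intermediary can be sketched as below. The message tuples mirror mSCTP-style add/delete-address reconfiguration but are illustrative, not a real SCTP API; the addresses are examples:

```python
# A minimal sketch of the handover signalling order; names and addresses are illustrative.

def handover_messages(old_addr, new_addr, mobility_service_addr):
    """Return the mSCTP-style requests, in order, for the described handover."""
    return [
        # Sent by the mobile device to the communication endpoint:
        ("mobile->endpoint", "DELETE_IP", old_addr),              # stop transmissions to the first address
        ("mobile->endpoint", "ADD_IP", mobility_service_addr),    # intermediary catches in-flight data
        # After attaching to the second, non-intersecting network, the mobile
        # device tells the mobility support service the handover is complete:
        ("mobile->mobility_service", "HANDOVER_COMPLETE", new_addr),
        # Sent by the mobility support service on the mobile device's behalf:
        ("mobility_service->endpoint", "ADD_IP", new_addr),       # continue communication at the new address
    ]

print(handover_messages("10.0.0.5", "192.168.1.7", "203.0.113.9"))
```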
Abstract:
A method, system, and program for monitoring thread usage to dynamically control a thread pool are provided. An application running on a server system invokes a listener thread on a listener socket for receiving client requests at the server system and passing the client requests to one of multiple threads waiting in a thread pool. Additionally, the application sends an ioctl call in blocking mode on the listener thread. A TCP layer within the server system detects the listener thread in blocking mode and monitors, over a sample period, a thread count of at least one of the number of incoming requests waiting to be processed and the number of threads remaining idle in the thread pool. Once the TCP layer detects a thread usage event, the ioctl call is returned indicating the thread usage event with the thread count, such that the number of threads in the thread pool may be dynamically adjusted to handle the thread count.
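The adjustment loop implied above can be sketched as follows. The events iterable stands in for successive returns of the blocking ioctl, and the event names and sizing rule are assumptions for illustration:

```python
# A minimal sketch of the pool-adjustment loop; the events iterable stands in
# for the blocking ioctl's return values, and all names are illustrative.

import threading, time

def worker(stop):
    while not stop.is_set():
        time.sleep(0.1)                      # placeholder for serving a client request

def manage_pool(events):
    """events yields (event_name, thread_count) pairs reported by the TCP layer."""
    pool = []
    for event, thread_count in events:
        if event == "requests_waiting":      # backlog detected: grow the pool
            for _ in range(thread_count):
                stop = threading.Event()
                t = threading.Thread(target=worker, args=(stop,), daemon=True)
                t.start()
                pool.append((t, stop))
        elif event == "threads_idle":        # idle workers detected: shrink the pool
            for _ in range(min(thread_count, len(pool))):
                _, stop = pool.pop()
                stop.set()                   # signal a worker to exit
    return len(pool)

# Example: grow by 4 on a backlog report, then retire 2 idle workers.
print(manage_pool([("requests_waiting", 4), ("threads_idle", 2)]))
```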
Abstract:
Data communications through a split connection proxy in a data communications protocol include receiving in a proxy from a client, asynchronously with respect to any other messages between the client and the proxy, one or more client messages whose data items include a connection request for a connection between the client and the proxy, destination connection data identifying a destination server, and a message from the client to the destination server; and sending from the proxy to the destination server, asynchronously with respect to any messages between the client and the proxy and asynchronously with respect to any other messages between the proxy and the server, one or more proxy messages whose data items include a connection request for a connection between the proxy and the destination server and the message from the client to the destination server.
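The key point, that one bundled message carries the connection request, the destination identity, and the payload, and that nothing waits on a reply, can be sketched as follows. The field names and the async transport stub are illustrative assumptions:

```python
# A minimal sketch of the bundled, asynchronous exchange; field names are illustrative.

import asyncio

async def handle_client_message(client_msg, send_to_server):
    """Proxy side: a single client message carries the connection request, the
    destination server identity, and the client's payload; the proxy forwards
    them in one bundled proxy message without waiting for any reply."""
    proxy_msg = {
        "connect": ("proxy", client_msg["destination"]),   # proxy-to-destination connection request
        "payload": client_msg["payload"],                  # the client's message for the destination
    }
    await send_to_server(client_msg["destination"], proxy_msg)

async def main():
    sent = []
    async def send_to_server(dest, msg):                   # stand-in for the proxy-to-server transport
        sent.append((dest, msg))
    await handle_client_message(
        {"connect": ("client", "proxy"),
         "destination": "server.example.com:80",
         "payload": b"GET / HTTP/1.1\r\n\r\n"},
        send_to_server)
    print(sent)

asyncio.run(main())
```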
Abstract:
Determining availability of a destination for computer network communications includes providing, on a caching device, a destination availability cache comprising at least one cache entry representing availability of a destination, and providing, from the caching device to a source, through computer network communications, information indicating the availability of the destination. In typical embodiments, the cache entry comprises a network address of a destination device and a time limitation for the cache entry.
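A cache entry pairing an address with an availability flag and a time limitation might look like the sketch below; the class and method names are illustrative, not taken from the source:

```python
# A small sketch of the described cache, assuming each entry holds an
# availability flag and an expiry time (the "time limitation"); names are illustrative.

import time

class DestinationAvailabilityCache:
    def __init__(self):
        self._entries = {}                    # destination address -> (available, expires_at)

    def record(self, address, available, ttl_seconds):
        self._entries[address] = (available, time.time() + ttl_seconds)

    def lookup(self, address):
        """Return True/False from a live entry, or None if unknown or expired."""
        entry = self._entries.get(address)
        if entry is None:
            return None
        available, expires_at = entry
        if time.time() > expires_at:          # time limitation reached: entry no longer valid
            del self._entries[address]
            return None
        return available

cache = DestinationAvailabilityCache()
cache.record("192.0.2.10", available=True, ttl_seconds=30)
print(cache.lookup("192.0.2.10"))             # True while the entry is fresh
```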
Abstract:
A method, system, and computer program product for optimizing a message size for communication in a communication network are disclosed. The method comprises identifying a connection to a target, sending to a path maximum transmission unit value server (which is not the target) a request for a path maximum transmission unit value for the connection to the target, and, in response to receiving the path maximum transmission unit value for the connection to the target from the server, optimizing a communication by sending to the target a packet having a size in accordance with the value.
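The client-side flow might look like the sketch below. The query_pmtu_server() helper, the server name, and the returned value are hypothetical; the sizing simply subtracts typical IP and TCP header overhead from the reported path MTU:

```python
# A minimal sketch of the client-side flow; the PMTU server query and its
# returned value are hypothetical, and header sizes assume IPv4 without options.

IP_HEADER = 20
TCP_HEADER = 20

def query_pmtu_server(pmtu_server, target):
    """Stand-in for the request sent to the path-MTU value server (not the target)."""
    return {"target": target, "pmtu": 1500}            # example value returned by the server

def build_segments(payload, pmtu):
    mss = pmtu - IP_HEADER - TCP_HEADER                # largest segment that fits the path MTU
    return [payload[i:i + mss] for i in range(0, len(payload), mss)]

info = query_pmtu_server("pmtu.example.net", "server.example.com")
segments = build_segments(b"x" * 4000, info["pmtu"])
print([len(s) for s in segments])                      # segments sized to the reported path MTU
```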
Abstract:
TCP congestion avoidance is implemented upon retransmission of a packet and is reverted to the original congestion state upon receipt of an early acknowledgement (ACK) indicating reordering of packets, thereby eliminating a needless restriction on TCP bandwidth. Upon receiving an ACK to a retransmitted packet, it is determined whether the ACK resulted from receipt of the original reordered packet or of the retransmitted packet, based on the arrival time of the ACK at the sender. If the round-trip time (RTT) for the retransmitted packet is much lower than the average or currently calculated RTT for the network link between sender and receiver, then the retransmission occurred as a result of a reordering event, and the congestion window is restored to its value prior to the retransmission, thereby permitting the network link to continue operating at its original increased throughput.
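The RTT comparison that drives the restore decision can be sketched as below; the 0.5 threshold factor is an illustrative assumption, not a value from the source:

```python
# A minimal sketch of the sender-side decision; the reorder_factor threshold is illustrative.

def on_ack_of_retransmit(rtt_sample, avg_rtt, cwnd, cwnd_before_retransmit,
                         reorder_factor=0.5):
    """If the ACK arrives far sooner than a full round trip would allow, it was
    triggered by the original (reordered) packet, so the needless congestion
    window reduction is undone."""
    if rtt_sample < reorder_factor * avg_rtt:   # much lower than the link's measured RTT
        return cwnd_before_retransmit           # reordering: restore the pre-retransmission window
    return cwnd                                 # genuine loss: keep the reduced window

# Example: a 20 ms "RTT" against a 100 ms average indicates reordering.
print(on_ack_of_retransmit(rtt_sample=0.020, avg_rtt=0.100,
                           cwnd=10, cwnd_before_retransmit=20))
```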