Abstract:
A method, system, and computer program product in a computer-readable medium for delivering data, received from a network, to a storage buffer assigned to an application are proposed. An application designates a communication buffer within a local data processing system for buffering data communicated with that application. The local data processing system reports to a network interface of the local data processing system a memory address of the designated communication buffer, and the data processing system creates a cookie containing the memory address. The data processing system then sends the cookie from the local data processing system to a remote data processing system, such that the remote data processing system may address data directly to the designated communication buffer.
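A minimal sketch of the idea in Python follows, assuming a cookie layout of address plus length and illustrative helper names (make_cookie, send_cookie) that are not taken from the patent; it shows a designated buffer being advertised to a remote peer over an existing socket, not a definitive implementation.

# Illustrative sketch only: packs a registered buffer's address and length
# into a "cookie" and sends it to a remote peer over an existing socket.
# The helper names and the cookie layout are assumptions, not the patent's.
import ctypes
import socket
import struct

def make_cookie(buf: ctypes.Array) -> bytes:
    """Encode the buffer's memory address and length as an opaque cookie."""
    address = ctypes.addressof(buf)
    return struct.pack("!QI", address, len(buf))  # 8-byte address, 4-byte length

def send_cookie(sock: socket.socket, cookie: bytes) -> None:
    """Report the designated communication buffer to the remote data processing system."""
    sock.sendall(cookie)

# Example: designate a 64 KiB communication buffer and advertise it.
comm_buffer = ctypes.create_string_buffer(64 * 1024)
# with a connected socket `sock`:  send_cookie(sock, make_cookie(comm_buffer))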
Abstract:
A receiving host in a TCP/IP network sends an acknowledgment indicating that a received data packet is corrupt. The sending host then begins transmitting packets with a new IP header field, a check-TCP-checksum bit, set, thereby requesting that all routers in the TCP/IP network perform a checksum on the entire received packet. Routers in the TCP/IP network will perform a complete checksum on an entire packet with the check-TCP-checksum bit set, and not just on the IP header. The routers continuously monitor the ratio of corrupt packets received on a particular port that fail the entire-packet checksum to the total number of packets received on that port. If the ratio of corrupt-to-received packets exceeds a corruption threshold, the router assumes that the associated link is causing data corruption and issues a routing update indicating that the link is bad and should be avoided. Once the retransmission rate between the sender and the receiver drops below a threshold level, the bad link has been detected and avoided within the TCP/IP network, and the check-TCP-checksum option in the IP header is no longer set in data packets transmitted to the receiving host.
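The router-side bookkeeping can be sketched as below, assuming per-port counters and an illustrative threshold value and issue_routing_update callback that are not specified in the abstract.

# Illustrative sketch of the per-port corruption-ratio monitor described above.
# The threshold value and the issue_routing_update callback are assumptions.
from collections import defaultdict

CORRUPTION_THRESHOLD = 0.05   # assumed: 5% corrupt packets marks a link as bad

port_totals = defaultdict(int)    # packets received per port
port_corrupt = defaultdict(int)   # packets failing the entire-packet checksum

def issue_routing_update(port: int) -> None:
    print(f"routing update: link on port {port} is bad and should be avoided")

def record_packet(port: int, passed_full_checksum: bool) -> None:
    port_totals[port] += 1
    if not passed_full_checksum:
        port_corrupt[port] += 1
    ratio = port_corrupt[port] / port_totals[port]
    if ratio > CORRUPTION_THRESHOLD:
        issue_routing_update(port)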
Abstract:
A graphical user interface of a network client (106) includes a stock ticker from a stock server (110) and a news sidebar from a news server (112), delivered over IP addresses advertised to the client (106) in the list of multi-homed addresses from the video server (104) specified under the Stream Control Transmission Protocol (SCTP). The client accepts real-time data from the stock exchange server and the news agency server on the multi-homed IP addresses designated in the association with the video server (104), without knowing that the data is coming from a source other than the video server (104). The real-time data feeds from the video, stock, and news servers are aggregated on the client (106) with enhanced speed because the feeds come directly to the client rather than via the video server. The operating systems of the home server and the remote servers utilize SCTP and specialized commands to implement this enhanced-speed aggregation of real-time data streamed to network clients, without requiring modifications to existing client systems.
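The client-side behavior can be approximated as below; standard-library Python has no SCTP multi-homing support, so a UDP socket stands in for the association, and the address list and port are assumptions chosen only for illustration.

# Approximate sketch only: a UDP socket stands in for the SCTP association,
# and the association's advertised address list and port are assumptions.
import socket

ASSOCIATION_ADDRESSES = {"192.0.2.10", "192.0.2.20", "192.0.2.30"}  # advertised by the video server
LISTEN_PORT = 5000

def handle_feed_data(data: bytes) -> None:
    print(f"aggregated {len(data)} bytes of real-time feed data")

def aggregate_feeds() -> None:
    """Accept real-time data from any address in the association as one feed."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", LISTEN_PORT))
    while True:
        data, (source_ip, _) = sock.recvfrom(65535)
        if source_ip in ASSOCIATION_ADDRESSES:
            # The client treats the data as part of the video server's association,
            # whether it originated at the video, stock, or news server.
            handle_feed_data(data)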
Abstract:
The present invention provides a method and apparatus for multicast tunneling for mobile devices. The method comprises receiving a multicast packet directed to a plurality of mobile nodes, the mobile nodes being associated with a home subnet, and identifying whether any of the plurality of mobile nodes are coupled to a subnet other than the home subnet, wherein each of the identified mobile nodes has an associated transmission path through which that mobile node can be reached. The method further provides, in response to determining that at least some of the mobile nodes are coupled to the subnet other than the home subnet, for determining which of the identified mobile nodes have a common next hop in their associated transmission paths and generating a packet that includes at least a portion of the multicast packet and a list of the mobile nodes that have the common next hop. The method further provides for transmitting the generated packet to the common next hop.
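A compact sketch of the grouping step follows, assuming an illustrative MobileNode record, home subnet value, and packet layout that are not taken from the patent; it groups away-from-home nodes by their common next hop and builds one tunneled packet per group.

# Illustrative sketch of grouping away-from-home mobile nodes by the common
# next hop on their transmission paths and building one packet per group.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class MobileNode:
    node_id: str
    current_subnet: str
    next_hop: str          # first hop on the path through which the node is reached

HOME_SUBNET = "10.0.0.0/24"  # assumed home subnet

def build_tunnel_packets(multicast_payload: bytes, nodes: list[MobileNode]) -> dict[str, dict]:
    """Return one generated packet per common next hop, each listing its mobile nodes."""
    away_nodes = [n for n in nodes if n.current_subnet != HOME_SUBNET]
    groups: dict[str, list[str]] = defaultdict(list)
    for node in away_nodes:
        groups[node.next_hop].append(node.node_id)
    return {
        next_hop: {"node_list": node_ids, "payload": multicast_payload}
        for next_hop, node_ids in groups.items()
    }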
Abstract:
A system, apparatus, and method of improving network data traffic between interconnected high-speed switches are provided. As is well known, when a packet of data is longer than the path maximum transmission unit (PMTU), the packet is fragmented. In the invention, the packet is fragmented by a transmitting router connected to a high-speed switch. When a receiving router, which is also connected to a high-speed switch, begins to receive the fragments, it checks whether its sub-network can handle data of a substantially longer length than the length of the fragments. If so, the receiving router collects the fragments, reassembles them into the original packet, and transmits the reassembled packet to its destination.
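The receiving router's decision can be sketched as below, under the assumption that fragments of one packet have already been collected and that the outgoing sub-network's MTU is known; names and the example values are illustrative only.

# Illustrative sketch of the receiving router's reassembly decision.
def maybe_reassemble(fragments: list[bytes], fragment_size: int, subnet_mtu: int) -> list[bytes]:
    """Return the packets to forward: one reassembled packet if the sub-network
    can carry something substantially longer than the fragments, else the
    fragments unchanged."""
    original_length = sum(len(f) for f in fragments)
    if subnet_mtu >= original_length and subnet_mtu > fragment_size:
        return [b"".join(fragments)]   # reassemble into the original packet
    return fragments                   # forward the fragments as received

# Example: three 1500-byte fragments entering a 9000-byte-MTU sub-network.
packets = maybe_reassemble([b"\x00" * 1500] * 3, fragment_size=1500, subnet_mtu=9000)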
Abstract:
Methods, systems, and media to sub-divide an ephemeral port range and allocate ports from the sub-divided ephemeral port ranges to facilitate communication with another destination, or target, application are contemplated. Embodiments involve a client computer system having one or more source applications. Embodiments also include hardware and/or software for categorizing transactions based upon characteristics of the transactions. Such categories correspond to the categories to which sub-divisions of ephemeral port numbers are assigned. After a transaction is associated with a category, a port number is selected from a pool of available port numbers in the sub-division of ephemeral port numbers assigned to that category. In many embodiments, an initial configuration is implemented via a configuration file at the startup of the client computer system. In further embodiments, assignments of ephemeral port numbers to the categories of transactions are dynamically adjusted based upon, e.g., actual usage of the port numbers.
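A minimal allocator along these lines is sketched below; the category names, range boundaries, and categorization rule are assumptions standing in for the configuration file rather than values from the patent.

# Illustrative sketch of sub-dividing the ephemeral port range by transaction
# category and allocating from the matching pool.
EPHEMERAL_SUBDIVISIONS = {
    "short_lived": range(49152, 53248),
    "bulk_transfer": range(53248, 57344),
    "default": range(57344, 65536),
}

available = {category: set(ports) for category, ports in EPHEMERAL_SUBDIVISIONS.items()}

def categorize(transaction: dict) -> str:
    """Map a transaction to a category based on its characteristics (assumed rule)."""
    return "bulk_transfer" if transaction.get("bytes", 0) > 1_000_000 else "short_lived"

def allocate_port(transaction: dict) -> int:
    category = categorize(transaction)
    pool = available.get(category) or available["default"]   # fall back if the pool is empty
    return pool.pop()   # raises KeyError if every pool is exhausted

port = allocate_port({"bytes": 5_000_000})   # drawn from the bulk_transfer sub-division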
Abstract:
Methods, systems, and products are disclosed for dynamically provisioning server resources. More particularly, methods, systems, and products are disclosed for dynamically provisioning computer system resources by monitoring a connection performance parameter of a data communications port operating in a data communications protocol that has a connection backlog queue with a connection backlog queue size, and changing the connection backlog queue size in dependence upon the monitored connection performance parameter without interrupting the operation of the data communications port and without user intervention. In typical embodiments of the present invention, monitoring a connection performance parameter includes receiving a connection request and determining that the connection backlog queue is full, and changing the connection backlog queue size in dependence upon the monitored connection performance parameter includes increasing the connection backlog queue size.
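A small sketch of the idea follows; it assumes the platform permits calling listen() again on a live socket to change the backlog, and the fullness check is only a stand-in for the patent's monitoring of the connection performance parameter.

# Illustrative sketch of growing a listening port's connection backlog when the
# queue is observed to be full, without restarting the listener.
import socket

backlog = 128
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", 8080))
server.listen(backlog)

def backlog_queue_is_full() -> bool:
    """Stand-in for monitoring the connection performance parameter."""
    return False   # e.g. derived from dropped-connection counters on a real system

def adjust_backlog() -> None:
    global backlog
    if backlog_queue_is_full():
        backlog *= 2
        server.listen(backlog)   # resize without interrupting the data communications port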
Abstract:
The present invention provides a method and apparatus for handling reordered data packets. A method comprises receiving a data packet and determining whether the data packet is received out of order. The method further comprises, in response to determining that the data packet was received out of order, delaying transmission of an acknowledgement indicating that a data packet is missing.
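A receiver-side sketch of this rule is shown below, assuming an illustrative delay value and sequence-number bookkeeping that the abstract does not specify; if the "missing" segment arrives within the delay, the held acknowledgement is cancelled.

# Illustrative sketch: hold back the acknowledgement that would report a gap
# when a segment arrives out of order, in case it was merely reordered.
import threading

ACK_DELAY_SECONDS = 0.2   # assumed delay before reporting a missing segment
expected_seq = 0
pending_ack_timer = None

def send_ack(ack_number: int) -> None:
    print(f"ACK {ack_number}")

def on_segment(seq_number: int, length: int) -> None:
    global expected_seq, pending_ack_timer
    if seq_number == expected_seq:
        expected_seq += length
        if pending_ack_timer is not None:
            pending_ack_timer.cancel()        # the "missing" segment was only reordered
            pending_ack_timer = None
        send_ack(expected_seq)                # in order: acknowledge immediately
    elif pending_ack_timer is None:
        # Out of order: delay the acknowledgement indicating a missing segment.
        pending_ack_timer = threading.Timer(ACK_DELAY_SECONDS, send_ack, args=(expected_seq,))
        pending_ack_timer.start()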
Abstract:
A method, computer program product, and data processing system for efficiently discovering and storing path MTU information in a sending host are disclosed. In a preferred embodiment, two path MTU tables are maintained. One path MTU table contains MTU values corresponding to the first-hop routers associated with the sending host. The other path MTU table contains MTU values corresponding to individual destination hosts. When the sending host needs to send information to a destination, it first consults the MTU table for individual destination hosts. If an entry for that destination host is found in the table, the sending host uses that MTU value. If not, the sending host consults the MTU table for the first-hop router on the path to the destination host and uses that MTU value. If that MTU value proves too high for the path, a new entry is made in the host-specific MTU table for the destination host.
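The two-table lookup can be sketched as below; the table contents, the first-hop lookup, and the helper names are assumptions for illustration only.

# Illustrative sketch of the two-table path MTU lookup described above.
destination_mtu: dict[str, int] = {}                     # per-destination-host MTU values
first_hop_mtu = {"192.0.2.1": 1500, "192.0.2.2": 9000}   # per-first-hop-router MTU values

def first_hop_router(destination: str) -> str:
    return "192.0.2.1"    # stand-in for a routing-table lookup

def path_mtu(destination: str) -> int:
    if destination in destination_mtu:
        return destination_mtu[destination]       # host-specific entry wins
    return first_hop_mtu[first_hop_router(destination)]

def on_fragmentation_needed(destination: str, reported_mtu: int) -> None:
    """The router's MTU was too high for this path: record a host-specific entry."""
    destination_mtu[destination] = reported_mtu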
Abstract:
A host enables any adapter of multiple adapters of the host to concurrently support any VIPA of the multiple VIPAs assigned to the host. Responsive to a failure of at least one particular adapter from among the multiple adapters, the host triggers the remaining, functioning adapters to broadcast a separate hardware address update for each VIPA over the network, such that, for a failover in the host supporting the multiple VIPAs, the host directs at least one other host accessible via the network to address any new packets for the multiple VIPAs to one of the separate hardware addresses of one of the remaining adapters.
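The failover step can be sketched as below; the adapter and VIPA values are assumptions, and the broadcast helper is a stand-in for sending a hardware address update (for example, a gratuitous ARP) rather than the patent's actual mechanism.

# Illustrative sketch of the failover step: each remaining adapter broadcasts
# a separate hardware address update for every VIPA after an adapter fails.
from dataclasses import dataclass

@dataclass
class Adapter:
    name: str
    hw_address: str
    healthy: bool = True

VIPAS = ["203.0.113.10", "203.0.113.11"]   # assumed VIPAs assigned to the host
adapters = [Adapter("eth0", "02:00:00:00:00:01"), Adapter("eth1", "02:00:00:00:00:02")]

def broadcast_hw_address_update(vipa: str, adapter: Adapter) -> None:
    """Stand-in for announcing that `vipa` is now reachable at this adapter."""
    print(f"announce {vipa} at {adapter.hw_address} via {adapter.name}")

def on_adapter_failure(failed: Adapter) -> None:
    failed.healthy = False
    for adapter in (a for a in adapters if a.healthy):
        for vipa in VIPAS:
            broadcast_hw_address_update(vipa, adapter)   # one separate update per VIPA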