Abstract:
In accordance with the principles of the present invention, an advance is made over the prior art with a new system and method for a buffer management scheme. Certain embodiments of the invention improve the response of AQM schemes with controllable parameters to variations of the output rate of the bottleneck buffer. The impact on TCP performance can be substantial in most cases where the bottleneck rate is not guaranteed to be fixed. The new solution allows AQM schemes to achieve queue stability despite continuous variations of the bottleneck rate.
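A rate-adaptive AQM update of the kind described above can be sketched as follows. This is an illustrative PI-style controller, not the patented design: the gain values, the rescaling rule (scaling both gains by the measured drain rate relative to a nominal rate), and all parameter names are assumptions.

```python
def update_drop_prob(p, q, q_prev, q_ref, drain_rate, nominal_rate,
                     a=1.822e-5, b=1.816e-5):
    """One PI-style update of the packet-drop probability.

    Hypothetical sketch: the controller gains are rescaled by the
    measured bottleneck drain rate relative to a nominal rate, so the
    control loop responds consistently when the output rate of the
    bottleneck buffer varies.
    """
    g = drain_rate / nominal_rate          # rate-adaptation factor (assumption)
    p_next = p + a * g * (q - q_ref) - b * g * (q_prev - q_ref)
    return min(max(p_next, 0.0), 1.0)      # keep probability in [0, 1]
```

With this rescaling, the same queue excursion produces a proportionally smaller correction when the bottleneck drains more slowly, which is one way a controllable-parameter AQM can track rate variations.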
Abstract:
An energy-efficient connectionless routing method with simple lookup is disclosed for reducing the number of address lookups associated with a message packet. The energy-efficient connectionless routing method with simple lookup includes determining a label sequence which will allow the message packet to traverse a plurality of MPLS domains and affixing the label sequence to the header of the message packet. This allows the message packet to traverse a plurality of MPLS domains without requiring a subsequent IP address lookup at every MPLS domain boundary. The energy-efficient connectionless routing method with simple lookup is particularly useful for reducing power consumption associated with TCAM operations during IP address lookups. In addition, a Label Sequencing Edge Router is disclosed for performing the method.
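The label-sequence mechanism can be sketched as follows. The label-table structure, function names, and packet representation are hypothetical; the sketch only illustrates the idea that each domain boundary consumes one pre-computed label instead of performing an IP (TCAM) lookup.

```python
def affix_label_sequence(payload, domain_path, label_table):
    """Build a packet whose header carries one label per MPLS domain on
    the path. label_table maps a domain name to that domain's ingress
    label (an assumed structure for this sketch)."""
    labels = [label_table[d] for d in domain_path]
    return {"labels": labels, "payload": payload}

def traverse_boundary(packet):
    """At a domain boundary, consume the outermost label; an IP (TCAM)
    lookup would only be needed once the label sequence is exhausted."""
    if packet["labels"]:
        return packet["labels"].pop(0)   # label switch, no TCAM access
    return None                          # fall back to IP address lookup
```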
Abstract:
A scheduler and method for use in packet communication systems apply a generalized discrete-rate scheduling technique which removes the limitation of the linear increase in sorting complexity with the number of supported service rates. The set of supported service rates may be increased without increasing the number of timestamps that need to be sorted. Conversely, the generalized discrete-rate scheduler supports a given number of service rates using a smaller number of rate FIFO queues, thus further reducing complexity. Such improved performance is achieved by splitting, for scheduling purposes only, a connection or session into multiple sub-connections or sub-sessions. The technique can be applied to per-connection-timestamp and no-per-connection-timestamp discrete-rate schedulers, as well as to any other discrete-rate scheduler.
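The splitting idea can be illustrated with a toy decomposition. The greedy rule below is an assumption (the scheduler's actual splitting policy may differ); it only shows how a small set of base rates can serve many aggregate rates while the scheduler still sorts one timestamp per base rate.

```python
def split_into_subsessions(requested_rate, base_rates):
    """Greedy split of a requested service rate into sub-sessions, each
    carrying one of the supported base rates (hypothetical rule). With N
    base rates, only N rate timestamps need sorting, yet many more
    aggregate rates become available."""
    subs, remaining = [], requested_rate
    for r in sorted(base_rates, reverse=True):   # largest base rate first
        while remaining >= r:
            subs.append(r)
            remaining -= r
    return subs
```

For example, with base rates 1, 2, 4, and 8, a connection requesting rate 13 is split, for scheduling purposes only, into sub-sessions of rates 8, 4, and 1.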
Abstract:
A scheduler and method for use in ATM and packet communication systems apply a no-per-connection-timestamp discrete-rate scheduling technique which does not require the computation and storage of one timestamp per connection, maintaining instead a single timestamp per supported service rate. The elimination of the per-connection timestamps has no negative effect on the delay bounds guaranteed by the scheduler. The total implementation cost of such schedulers which approximate the Generalized Processor Sharing (GPS) policy is reduced, since there is less complexity involved in maintaining and sorting the timestamps for all connections.
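The single-timestamp-per-rate structure can be sketched as follows. This is a simplified illustration under fixed-size packets; the handling of idle/backlogged transitions and the exact timestamp update of the patented scheduler are not reproduced here.

```python
class NoPerConnTimestampScheduler:
    """Sketch of a discrete-rate scheduler: connections sharing a service
    rate wait in one FIFO per rate, and only one timestamp per rate is
    maintained and sorted (an assumption-laden simplification)."""
    def __init__(self, rates):
        self.fifo = {r: [] for r in rates}   # per-rate FIFO of (conn, pkt)
        self.ts = {r: 0.0 for r in rates}    # single timestamp per rate

    def enqueue(self, rate, conn, pkt):
        self.fifo[rate].append((conn, pkt))

    def dequeue(self):
        backlogged = [r for r in self.fifo if self.fifo[r]]
        if not backlogged:
            return None
        r = min(backlogged, key=lambda x: self.ts[x])  # smallest timestamp wins
        conn, pkt = self.fifo[r].pop(0)
        self.ts[r] += 1.0 / r                # advance by inverse service rate
        return conn, pkt
```

Sorting cost now grows with the number of supported rates rather than the number of connections.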
Abstract:
A system is disclosed which services a plurality of queues associated with respective data connections such that the system guarantees data transfer rates and data transfer delays to the data connections. This is achieved by associating each connection having at least one data packet waiting in its associated queue (such a connection being referred to as a backlogged connection) with a timestamp generated as a function of system parameters including (a) the number of queues that are backlogged, (b) the data transfer rate guaranteed to each connection, (c) the sum of data transfer rates guaranteed to all backlogged connections, (d) the previous timestamp of the connection, and (e) the weighted sum of the timestamps of all backlogged connections, each timestamp weighted by the data transfer rate guaranteed to the corresponding connection. The backlogged connection associated with the timestamp having the smallest value among all of the backlogged connections is then identified and a data packet is transmitted from the queue corresponding to that connection. A new timestamp is then generated for that connection if it is still backlogged. Once the transmission of the data packet is completed, the foregoing determination of the connection with the minimum timestamp is then repeated to identify the next queue to be serviced.
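The service loop can be sketched in the spirit of self-clocked fair queueing. The sketch below takes the system potential to be the rate-weighted average of the backlogged connections' timestamps, reflecting parameters (b), (c), and (e) above; the patent's exact timestamp function, including how it uses parameters (a) and (d), may differ, so treat this only as an illustration of the minimum-timestamp service discipline.

```python
class FairScheduler:
    """Timestamp-based scheduler sketch: serve the backlogged connection
    with the smallest timestamp, then re-stamp it if still backlogged."""
    def __init__(self, rates):
        self.rate = dict(rates)                  # guaranteed rate per connection
        self.queue = {c: [] for c in self.rate}  # packet lengths per connection
        self.ts = {c: 0.0 for c in self.rate}

    def _potential(self):
        # Rate-weighted average of backlogged connections' timestamps.
        backlogged = [c for c in self.queue if self.queue[c]]
        den = sum(self.rate[c] for c in backlogged)
        return sum(self.rate[c] * self.ts[c] for c in backlogged) / den if den else 0.0

    def enqueue(self, conn, pkt_len):
        if not self.queue[conn]:
            # Newly backlogged: potential plus normalized service time.
            self.ts[conn] = max(self.ts[conn], self._potential()) + pkt_len / self.rate[conn]
        self.queue[conn].append(pkt_len)

    def dequeue(self):
        backlogged = [c for c in self.queue if self.queue[c]]
        if not backlogged:
            return None
        c = min(backlogged, key=lambda x: self.ts[x])   # minimum timestamp
        self.queue[c].pop(0)                            # transmit head packet
        if self.queue[c]:                               # still backlogged: new stamp
            self.ts[c] = max(self.ts[c], self._potential()) + self.queue[c][0] / self.rate[c]
        return c
```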
Abstract:
The present disclosure generally discloses a congestion control capability for use in communication systems (e.g., to provide congestion control over wireless links in wireless systems, over wireline links in wireline systems, and so forth). The congestion control capability may be configured to provide congestion control for a transport flow of a transport connection, sent from a transport flow sender to a transport flow receiver, based on flow control associated with the transport flow. The transport flow may traverse a flow queue of a link buffer of a link endpoint. The link endpoint may provide to the transport flow sender, via an off-band signaling channel, an indication of the saturation state of the flow queue of the transport flow. The transport flow sender may control transmission of packets of the transport flow based on the indication of the saturation state of the flow queue of the transport flow.
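The sender-side behavior can be sketched as a simple gate driven by the off-band saturation indication. Class and method names are hypothetical; real implementations would also integrate this with the transport protocol's own congestion control.

```python
class FlowSender:
    """Transport flow sender that holds packets while the link endpoint
    reports, via an off-band signal, that the flow queue is saturated."""
    def __init__(self):
        self.saturated = False
        self.sent = []

    def on_saturation_signal(self, saturated):
        # Off-band indication of the flow queue's saturation state,
        # received from the link endpoint over a signaling channel.
        self.saturated = saturated

    def try_send(self, pkt):
        if self.saturated:
            return False          # hold: the flow queue is saturated
        self.sent.append(pkt)     # otherwise transmit on the transport flow
        return True
```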
Abstract:
The present disclosure generally discloses a networked transport layer socket capability. The networked transport layer socket capability, for a transport layer connection of a communication device attached to a network access device, moves the transport layer connection endpoint of the transport layer connection of the communication device (which also may be referred to as the client transport layer socket of the transport layer connection of the communication device) from the communication device into the network access device. The application client at the application layer of the communication device, rather than communicating internally with a client transport layer socket located within the communication device itself, communicates with the client transport layer socket that is provided for the communication device within the network access device, via a communication path between the communication device and the network access device.
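The relocation can be pictured with a minimal in-memory sketch: the application talks to a local stub, which relays to the client transport layer socket now hosted in the access device. All names and the relay mechanism are assumptions made for illustration only.

```python
class AccessDeviceSocket:
    """Client transport layer socket hosted in the network access device
    (hypothetical stand-in; a real one would run a transport protocol)."""
    def __init__(self):
        self.tx = []

    def send(self, data):
        self.tx.append(data)      # transport connection proceeds from here

class DeviceStub:
    """What the application client on the communication device talks to:
    not a local transport socket, but a relay over the path between the
    communication device and the network access device."""
    def __init__(self, remote_socket):
        self.remote_socket = remote_socket

    def send(self, data):
        self.remote_socket.send(data)   # forwarded to the relocated socket
```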
Abstract:
The present disclosure generally discloses a longest queue identification mechanism. The longest queue identification mechanism, for a set of queues of a buffer, may be configured to identify the longest queue of the set of queues and determine a length of the longest queue of the set of queues. The longest queue identification mechanism may be configured to identify the longest queue of the set of queues using only two variables including a longest queue identifier (LQID) variable for the identity of the longest queue and a longest queue length (LQL) variable for the length of the longest queue. It is noted that the identity of the longest queue of the set of queues may be an estimate of the identity of the longest queue and, similarly, that the length of the longest queue of the set of queues may be an estimate of the length of the longest queue.
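The two-variable mechanism can be sketched as follows. The update rules are one plausible reading of the abstract (grow the estimate on enqueue, shrink it only when the tracked queue is dequeued), which is also why LQID and LQL are estimates rather than exact values; untracked queues can shrink without updating them.

```python
class LongestQueueEstimator:
    """Estimate the longest queue of a set using only LQID and LQL.
    Per-queue lengths are assumed to be tracked elsewhere (here, a
    simple list of counters)."""
    def __init__(self, num_queues):
        self.length = [0] * num_queues
        self.lqid = 0    # longest queue identifier (estimate)
        self.lql = 0     # longest queue length (estimate)

    def enqueue(self, qid):
        self.length[qid] += 1
        # A queue growing past the current estimate becomes the new longest.
        if self.length[qid] > self.lql:
            self.lqid = qid
            self.lql = self.length[qid]

    def dequeue(self, qid):
        if self.length[qid] == 0:
            return
        self.length[qid] -= 1
        # Only the tracked queue refreshes the estimate on departure.
        if qid == self.lqid:
            self.lql = self.length[qid]
```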
Abstract:
A capability for controlling a size of a congestion window of an information transmission connection (ITC) is provided. The size of the congestion window of the ITC may be controlled based on a threshold, which may be based on an ideal bandwidth-delay product (IBDP) value. The IBDP value may be based on a product of an information transmission rate measure and a time measure. The information transmission rate measure may be based on a target information transmission rate for the ITC. The time measure may be based on a round-trip time measured between a sender of the ITC and a receiver of the ITC. The threshold may be a cap threshold where the size of the congestion window is prevented from exceeding the cap threshold. The threshold may be a reset threshold which may be used to control a reduction of the size of the congestion window.
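The cap-threshold case can be sketched numerically. Expressing the IBDP in packets, the 1500-byte packet size, and setting the cap equal to the IBDP are all assumptions of this sketch; the abstract leaves the cap-to-IBDP relationship and units open.

```python
def ideal_bdp_packets(target_rate_bps, rtt_seconds, packet_bytes=1500):
    """IBDP: product of the target information transmission rate and the
    measured round-trip time, converted here to packets (assumed unit)."""
    return (target_rate_bps / 8.0) * rtt_seconds / packet_bytes

def capped_cwnd(cwnd_packets, target_rate_bps, rtt_seconds, packet_bytes=1500):
    """Apply a cap threshold (here taken equal to the IBDP): the
    congestion window is prevented from exceeding it."""
    return min(cwnd_packets,
               ideal_bdp_packets(target_rate_bps, rtt_seconds, packet_bytes))
```

For a 12 Mbit/s target rate and a 100 ms RTT, the IBDP is about 100 packets of 1500 bytes, so a window of 250 packets would be capped while a window of 80 packets passes unchanged.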
Abstract:
A method is disclosed for expediting the distribution of data files between a server and a set of clients. The present invention relates to client-server systems and, more particularly, to cache nodes in client-server systems. In a client-server arrangement, a source system transfers data files to a server cache node connected to the source system. The server cache node sends a list of the data files cached in the server cache node to a client cache node. The client cache node, based on the list received from the server cache node, sends a request to the server cache node for the new data files cached in the server cache node. The server cache node sends the requested data files to the client cache node, and the client cache node transfers the data files to a destination system.
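The list-then-request exchange between the two cache nodes can be sketched as a single sync step. Representing each cache as a dict is an assumption for illustration; the point is that only files absent from the client cache are requested and transferred.

```python
def sync_caches(server_cache, client_cache):
    """One sync round between cache nodes (sketch): the server advertises
    its list, the client requests only the new files, the server sends
    them. Returns the list of files the client requested."""
    listing = list(server_cache)                            # server's file list
    wanted = [f for f in listing if f not in client_cache]  # client picks new files
    for f in wanted:
        client_cache[f] = server_cache[f]                   # requested files sent
    return wanted
```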