Abstract:
A method includes a proxy device receiving from a source device a request to establish a flow to a destination device; generating, based on the request, a meta-packet that indicates that the flow to the destination device is to be proxied; determining whether a pre-established flow connecting the proxy device to another proxy device that leads toward the destination device exists; sending the meta-packet on the pre-established flow, when it is determined that the pre-established flow exists; receiving, by the other proxy device, the meta-packet; and establishing the flow to the destination device based on the meta-packet, where the proxy devices assign one or more of a source address, a source port, a destination address, or a destination port, associated with the source device and the destination device, to the pre-established flow.
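A minimal sketch of the meta-packet exchange described above, kept entirely in memory; the names (MetaPacket, ProxyDevice, tunnels) and the simplified routing decision are illustrative assumptions, not taken from the abstract.

```python
# Sketch: a client-side proxy binds the original 4-tuple to a pre-established
# flow toward a peer proxy, which then opens the flow to the destination.
from dataclasses import dataclass

@dataclass
class MetaPacket:
    # Original flow parameters that the proxies assign to the pre-established flow.
    src_addr: str
    src_port: int
    dst_addr: str
    dst_port: int

class ProxyDevice:
    def __init__(self, name):
        self.name = name
        self.tunnels = {}        # peer proxy name -> pre-established flow (peer object here)
        self.flow_bindings = {}  # (src, sport, dst, dport) -> peer carrying that flow

    def add_pre_established_flow(self, peer):
        self.tunnels[peer.name] = peer

    def can_reach(self, dst_addr):
        return True  # simplification: every peer reaches every destination

    def peer_toward(self, dst_addr):
        # Placeholder routing decision: pick any peer that claims to reach dst_addr.
        for peer in self.tunnels.values():
            if peer.can_reach(dst_addr):
                return peer
        return None

    def handle_connect_request(self, src_addr, src_port, dst_addr, dst_port):
        meta = MetaPacket(src_addr, src_port, dst_addr, dst_port)
        peer = self.peer_toward(dst_addr)
        if peer is None:
            raise RuntimeError("no pre-established flow toward destination")
        # Assign the original addressing to the pre-established flow, then send the meta-packet.
        self.flow_bindings[(src_addr, src_port, dst_addr, dst_port)] = peer.name
        peer.receive_meta_packet(meta, from_proxy=self)

    def receive_meta_packet(self, meta, from_proxy):
        # The far-side proxy establishes the flow to the destination on behalf of the source.
        key = (meta.src_addr, meta.src_port, meta.dst_addr, meta.dst_port)
        self.flow_bindings[key] = from_proxy.name
        print(f"{self.name}: establishing flow to {meta.dst_addr}:{meta.dst_port} "
              f"for {meta.src_addr}:{meta.src_port}")

p1, p2 = ProxyDevice("proxy-A"), ProxyDevice("proxy-B")
p1.add_pre_established_flow(p2)
p1.handle_connect_request("10.0.0.5", 40321, "192.0.2.9", 443)
```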
Abstract:
A data prefetching technique uses predefined prefetching criteria and prefetching models to identify and retrieve prefetched data. A prefetching model that defines data to be prefetched via a network may be stored. It may be determined whether prefetching initiation criteria have been satisfied. Data for prefetching may be identified based on the prefetching model when the prefetching initiation criteria have been satisfied. The identified data may be prefetched, via the network, based on the prefetching model.
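A minimal sketch of the prefetching loop, assuming a model expressed as a list of URLs and an initiation criterion based on link idle time; both choices, and the class names, are illustrative.

```python
# Sketch: check an initiation criterion, identify data from the stored model,
# then prefetch whatever is not already cached.
import time

class PrefetchingModel:
    def __init__(self, urls):
        self.urls = urls  # data the model says should be prefetched via the network

class Prefetcher:
    def __init__(self, model, idle_threshold_s=5.0):
        self.model = model
        self.idle_threshold_s = idle_threshold_s
        self.last_activity = time.monotonic()
        self.cache = {}

    def criteria_satisfied(self):
        # Example initiation criterion: the link has been idle long enough.
        return time.monotonic() - self.last_activity >= self.idle_threshold_s

    def identify_data(self):
        # Identify data to prefetch from the stored model, skipping what is cached.
        return [u for u in self.model.urls if u not in self.cache]

    def fetch(self, url):
        return f"<contents of {url}>"  # stand-in for the actual network retrieval

    def run_once(self):
        if not self.criteria_satisfied():
            return
        for url in self.identify_data():
            self.cache[url] = self.fetch(url)

model = PrefetchingModel(["http://example.com/a", "http://example.com/b"])
pf = Prefetcher(model, idle_threshold_s=0.0)
pf.run_once()
print(sorted(pf.cache))
```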
Abstract:
A method is provided for queuing packets. A packet may be received and its flow identified. It may then be determined whether a flow queue has been assigned to the identified flow. The identified flow may be dynamically assigned to an available flow queue when it is determined that a flow queue has not been assigned to the identified flow. The packet may be enqueued into the available flow queue.
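A minimal sketch of dynamic flow-to-queue assignment, assuming flow identification by the classic 5-tuple and a fixed pool of queues; both assumptions are illustrative.

```python
# Sketch: identify the packet's flow, assign the flow to a free queue if it has
# none yet, then enqueue the packet into that queue.
from collections import deque

class FlowQueues:
    def __init__(self, num_queues=4):
        self.queues = [deque() for _ in range(num_queues)]
        self.free = list(range(num_queues))   # queues not yet assigned to any flow
        self.assignment = {}                  # flow id -> queue index

    def flow_id(self, packet):
        # Identify the packet's flow (here: the 5-tuple).
        return (packet["src"], packet["sport"], packet["dst"], packet["dport"], packet["proto"])

    def enqueue(self, packet):
        fid = self.flow_id(packet)
        if fid not in self.assignment:
            if not self.free:
                raise RuntimeError("no available flow queue")
            # Dynamically assign the flow to an available queue.
            self.assignment[fid] = self.free.pop()
        self.queues[self.assignment[fid]].append(packet)

fq = FlowQueues()
fq.enqueue({"src": "10.0.0.1", "sport": 1234, "dst": "10.0.0.2", "dport": 80, "proto": "tcp"})
fq.enqueue({"src": "10.0.0.1", "sport": 1234, "dst": "10.0.0.2", "dport": 80, "proto": "tcp"})
print([len(q) for q in fq.queues])   # both packets land in the same flow queue
```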
Abstract:
A data communication system is provided that allows for the efficient management of data communication sessions requested from a plurality of packet data servicing nodes organized in a cluster, each member of the cluster managing a cluster session table which contains data identifying mobile units and the packet data servicing nodes which are servicing data sessions with those mobile units. As a mobile unit moves from one portion of the system to another, a network element will request a data session from a packet data servicing node, which is then able to access the cluster session table to determine whether the data session is already being served by another member of the cluster. If the data session is already in existence, the base station controller will be directed to request a data session from the packet data servicing node which is already servicing that session.
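A minimal sketch of the cluster session table lookup, assuming the table is a shared mapping keyed by mobile unit identifier; the names (ClusterSessionTable, handle_session_request) are illustrative.

```python
# Sketch: if the mobile unit's session already exists on another cluster member,
# redirect the request to that member; otherwise register the new serving node.
class ClusterSessionTable:
    def __init__(self):
        self.sessions = {}  # mobile unit id -> serving packet data servicing node

    def serving_node_for(self, mobile_id):
        return self.sessions.get(mobile_id)

    def register(self, mobile_id, node_id):
        self.sessions[mobile_id] = node_id

def handle_session_request(table, requested_node, mobile_id):
    existing = table.serving_node_for(mobile_id)
    if existing is not None and existing != requested_node:
        # Session already exists elsewhere in the cluster: direct the requester there.
        return existing
    table.register(mobile_id, requested_node)
    return requested_node

table = ClusterSessionTable()
print(handle_session_request(table, "pdsn-1", "mobile-42"))  # pdsn-1 starts serving
print(handle_session_request(table, "pdsn-2", "mobile-42"))  # redirected back to pdsn-1
```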
Abstract:
Controlling congestion in a networking device having a plurality of input interface queues comprises estimating, in each of one or more sampling states, a data arrival rate for each of the plurality of input interface queues with respect to incoming data packets received on corresponding input interfaces, obtaining a set of estimated arrival rates for the plurality of the input interface queues, determining, for each polling state associated with a respective sampling state, the sequence in which the plurality of input interface queues should be polled using the set of estimated data arrival rates of the plurality of input interface queues, and polling the plurality of input interface queues in accordance with the determined sequence. The sequence indicates when, during a single polling cycle, each of the input interface queues should be polled in relation to every other of the input interface queues.
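A minimal sketch of rate-proportional polling, assuming the sequence is built by giving each queue a share of the polling cycle proportional to its estimated arrival rate; the slot-rounding and interleaving scheme is an illustrative choice.

```python
# Sketch: estimate per-queue arrival rates from one sampling interval, then
# build an interleaved polling order for the next polling state.
def estimate_rates(packet_counts, interval_s):
    # One sampling state: packets observed per queue over the sampling interval.
    return [count / interval_s for count in packet_counts]

def polling_sequence(rates, slots_per_cycle=10):
    # Give each input interface queue at least one slot, and otherwise a share
    # of the cycle proportional to its estimated arrival rate.
    total = sum(rates) or 1.0
    slots = [max(1, round(slots_per_cycle * r / total)) for r in rates]
    order, remaining = [], slots[:]
    while any(remaining):
        for q, left in enumerate(remaining):
            if left:
                order.append(q)        # interleave so no queue waits a whole cycle
                remaining[q] -= 1
    return order

rates = estimate_rates(packet_counts=[120, 30, 50], interval_s=1.0)
print(polling_sequence(rates))   # [0, 1, 2, 0, 1, 2, 0, 0, 0, 0]
```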
Abstract:
A global path identifier is assigned to each explicit route through a data communication network. The global path identifier is inserted into each packet as the packet enters a network and is used in selecting the next hop. When encountering a newly selected path, an ingress router sends an explicit object to downstream nodes of the path to set up explicit routes by caching the next hop in an Explicit Forwarding Information Base (“EFIB”) table. Ingress routers maintain an Explicit Route Table (“ERT”) that tracks the global path identifier associated with each flow through the network. Multiple flows using the same path can be implemented by sharing the same global path identifier. In case of sudden network load changes, rerouting can be performed by changing the global path identifier associated with those flows that need to be rerouted and by then transmitting a new path object to downstream nodes.
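A minimal sketch of forwarding on a global path identifier, assuming each node's EFIB is a dictionary from path identifier to next hop and the ingress ERT maps a flow key to a path identifier; the path-object installation is reduced to a loop over the hops.

```python
# Sketch: the ingress installs a path object (each node caches its next hop in
# its EFIB), records the flow -> path id binding in its ERT, and forwarding
# then follows the cached next hops for that path id.
class Node:
    def __init__(self, name):
        self.name = name
        self.efib = {}   # global path id -> next-hop node (cached from the path object)

class IngressRouter(Node):
    def __init__(self, name):
        super().__init__(name)
        self.ert = {}    # flow key -> global path identifier

    def install_path(self, path_id, hops):
        # Send an explicit path object downstream: each node caches its next hop.
        for node, next_hop in zip(hops, hops[1:] + [None]):
            node.efib[path_id] = next_hop

    def forward(self, flow_key, first_hop):
        path_id = self.ert[flow_key]
        node, trace = first_hop, []
        while node is not None:
            trace.append(node.name)
            node = node.efib[path_id]    # select the next hop from the EFIB
        return trace

a, b, c = Node("A"), Node("B"), Node("C")
ingress = IngressRouter("ingress")
ingress.install_path(path_id=7, hops=[a, b, c])
ingress.ert[("10.0.0.1", "10.0.9.9")] = 7    # multiple flows may share path id 7
print(ingress.forward(("10.0.0.1", "10.0.9.9"), first_hop=a))   # ['A', 'B', 'C']
```

Rerouting a flow then amounts to installing a new path object under a different path identifier and updating that flow's entry in the ERT.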
Abstract:
A two-phase packet processing technique is provided for routing traffic in a packet-switched, integrated services network which supports a plurality of different service classes. During Phase I, packets are retrieved from the router input interface and classified in order to identify the associated priority level of each packet and/or to determine whether a particular packet is delay-sensitive. If it is determined that a particular packet is delay-sensitive, the packet is immediately and fully processed. If, however, it is determined that the packet is not delay-sensitive, full processing of the packet is deferred and the packet is stored in an intermediate data structure. During Phase II, packets stored within the intermediate data structure are retrieved and fully processed. The technique of the present invention significantly reduces packet processing latency, particularly with respect to high priority or delay-sensitive packets. It is easily implemented in conventional routing systems, imposes little computational overhead, and consumes only a limited amount of memory resources within such systems.
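A minimal sketch of the two-phase scheme, assuming a classify() stub that flags delay-sensitive packets and a simple deque standing in for the intermediate data structure; both are illustrative.

```python
# Sketch: Phase I fully processes delay-sensitive packets immediately and defers
# the rest; Phase II drains the intermediate data structure.
from collections import deque

def classify(packet):
    # Phase I classification: identify priority / delay sensitivity.
    return packet.get("delay_sensitive", False)

def fully_process(packet):
    print("processed", packet["id"])

def phase_one(input_interface, deferred):
    while input_interface:
        packet = input_interface.popleft()
        if classify(packet):
            fully_process(packet)       # delay-sensitive: process immediately
        else:
            deferred.append(packet)     # defer full processing to Phase II

def phase_two(deferred):
    while deferred:
        fully_process(deferred.popleft())

rx = deque([{"id": 1, "delay_sensitive": True}, {"id": 2}, {"id": 3, "delay_sensitive": True}])
intermediate = deque()
phase_one(rx, intermediate)   # packets 1 and 3 are processed first
phase_two(intermediate)       # packet 2 is processed afterwards
```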
Abstract:
A method includes receiving a data unit, determining whether a current state, associated with a deterministic finite automaton (DFA) that includes a portion of states in a bitmap and a remaining portion of states in a DFA table, is a bitmap state or not, and determining whether a value corresponding to the data unit is greater than a threshold value, when it is determined that the current state is not a bitmap state. The method further includes determining whether the current state is insensitive, when it is determined that the value corresponding to the data unit is greater than the threshold value, where insensitive means that each next state is a same state for the current state, and selecting a default state, as a next state for the current state, when it is determined that the current state is insensitive.
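A minimal sketch of the next-state selection path described above, using a toy DFA whose state layout (bitmap states versus DFA-table states), threshold value, and "insensitive" flags are made-up examples.

```python
# Sketch: bitmap states use a bitmap lookup; non-bitmap states with an input
# value above the threshold take the default next state when the state is
# insensitive, and otherwise probe the DFA table row.
BITMAP_STATES = {0}                     # states represented via the bitmap
DFA_TABLE = {                           # remaining states: full transition rows
    1: {"insensitive": False, "next": {ord("a"): 2, "default": 1}},
    2: {"insensitive": True,  "next": {"default": 2}},  # every next state is the same
}
THRESHOLD = 0x7F                        # values above this may take the shortcut

def bitmap_lookup(current, byte):
    return 1                            # placeholder for the bitmap-encoded transition

def next_state(current, byte):
    if current in BITMAP_STATES:
        return bitmap_lookup(current, byte)
    row = DFA_TABLE[current]
    if byte > THRESHOLD and row["insensitive"]:
        return row["next"]["default"]   # insensitive: default state, no table probe
    return row["next"].get(byte, row["next"]["default"])

state = 0
for b in b"aa\xff":
    state = next_state(state, b)
print(state)                            # 2: the last byte took the insensitive shortcut
```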
Abstract:
In general, the invention is directed to techniques for improving memory utilization in a priority queuing system of a network device. More specifically, a priority queue memory management system is described in which memory pages are assigned to the various priority queues in order to implement an efficient first in, first out (FIFO) functionality. The dynamic memory techniques described herein allow the multiple priority queues to share a common memory space. As a result, each priority queue does not require a pre-allocated amount of memory that matches the aggregate size of the packets that must be buffered by the queue.
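A minimal sketch of page-based sharing, assuming a common pool of fixed-size memory pages handed to priority queues on demand; the page size, pool size, and class names are illustrative.

```python
# Sketch: all priority queues draw pages from one shared pool, so no queue needs
# memory pre-allocated for its worst-case aggregate packet size.
class PagePool:
    def __init__(self, num_pages, page_size):
        self.page_size = page_size
        self.free_pages = list(range(num_pages))   # shared across all priority queues

    def allocate(self):
        if not self.free_pages:
            raise MemoryError("shared page pool exhausted")
        return self.free_pages.pop()

    def release(self, page):
        self.free_pages.append(page)

class PriorityQueue:
    def __init__(self, pool):
        self.pool = pool
        self.pages = []                        # pages holding this queue's packets, FIFO order
        self.bytes_in_tail = pool.page_size    # force a page allocation on first enqueue

    def enqueue(self, packet_len):
        if self.bytes_in_tail + packet_len > self.pool.page_size:
            self.pages.append(self.pool.allocate())  # grab another page from the common pool
            self.bytes_in_tail = 0
        self.bytes_in_tail += packet_len

pool = PagePool(num_pages=8, page_size=2048)
high, low = PriorityQueue(pool), PriorityQueue(pool)
high.enqueue(1500)
low.enqueue(700)
print(len(pool.free_pages))   # pages left in the common pool after both queues buffered data
```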
Abstract:
A resource manager 20 receives and compiles data from a plurality of base transceiver stations 14 to enable an admission control decision before beginning a communication session with a mobile unit 12. The historic usage patterns of the mobile unit 12 and the historic and present bandwidth availability for cells likely to be impacted are taken into account to make the admission control decision.
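A minimal sketch of the admission control decision, assuming a simple rule that compares the mobile unit's historic usage against present and historic bandwidth availability in each cell likely to be impacted; the field names and thresholds are illustrative.

```python
# Sketch: admit the session only if every impacted cell can absorb the mobile
# unit's historic usage, judged conservatively from present and historic free bandwidth.
def admit(mobile_history_kbps, impacted_cells):
    # impacted_cells: per-cell present free bandwidth and a historic availability
    # estimate (both in kbps), as reported by the base transceiver stations.
    for cell in impacted_cells:
        expected_free = min(cell["present_free_kbps"], cell["historic_free_kbps"])
        if mobile_history_kbps > expected_free:
            return False    # the session would likely overload this cell
    return True

cells = [
    {"present_free_kbps": 800, "historic_free_kbps": 600},
    {"present_free_kbps": 1200, "historic_free_kbps": 900},
]
print(admit(mobile_history_kbps=500, impacted_cells=cells))   # True: admit the session
print(admit(mobile_history_kbps=700, impacted_cells=cells))   # False: reject
```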