Abstract:
A method and system for internal data loop back in a packet switch is provided. In some instances, the switch may be required to process multiple layers of a header within the data packet, such as when data is transferred over the network encapsulated with a TCP header at the Transport Layer to form a TCP packet, then encapsulated with an IP header at the Network Layer to form an IP packet, then encapsulated with one or more MPLS headers to form an MPLS packet, and then encapsulated with an Ethernet header at the Link Layer to form an Ethernet packet. In such an instance, the data packet can be iteratively processed by the packet switch using an internal loop back technique. An internal loop back may be accomplished by using a header providing internal routing instructions, resulting in the data packet being routed directly from an egress queue back to an ingress queue, whereupon the lower-layer headers can be processed.
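The iterative loop-back processing described above can be sketched as follows. The layer names and the list-based queue model are illustrative assumptions, not details from the patent; one header is processed per pass and the packet is then looped from egress back to ingress.

```python
# Outermost encapsulation first, matching the Ethernet/MPLS/IP/TCP stack
# described in the abstract (order is illustrative).
ENCAPSULATION_ORDER = ["ethernet", "mpls", "ip", "tcp"]

def process_packet(headers):
    """Process the outermost header on each pass, then loop the packet
    from the egress queue back to the ingress queue until no
    encapsulating headers remain."""
    loop_backs = []
    while headers:
        outer = headers.pop(0)    # ingress: process/strip outermost header
        loop_backs.append(outer)  # egress -> internal loop back to ingress
    return loop_backs
```

Each element of the returned list corresponds to one internal loop-back iteration through the switch.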
Abstract:
A method of obtaining packet forwarding data for routing packets. The steps may include: (1) receiving packet identification information including a virtual router identifier (VRID) and route data; and (2) determining whether the VRID of the received packet identification information belongs to a pre-defined set of VRIDs. If the VRID belongs to the pre-defined set, the method preferably performs the steps of: (1) converting the VRID into a shortened VRID; and (2) obtaining packet forwarding data by performing a ternary content addressable memory (TCAM) lookup using a short key. If the VRID does not belong to the pre-defined set, the method instead obtains packet forwarding data by performing a TCAM lookup using a long key.
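The key-selection logic above can be sketched in a few lines. The compressed-VRID table, the key widths, and the shift-based key packing are all illustrative assumptions; a real implementation would match the TCAM's actual key formats.

```python
# Hypothetical pre-defined VRID set, mapping full VRIDs to shortened VRIDs.
SHORT_VRID_SET = {100: 0x1, 200: 0x2}

def build_tcam_key(vrid, route_data):
    """Return (key_type, key): a short key if the VRID is in the
    pre-defined set (using the shortened VRID), else a long key."""
    if vrid in SHORT_VRID_SET:
        short_vrid = SHORT_VRID_SET[vrid]
        return ("short", (short_vrid << 32) | route_data)
    return ("long", (vrid << 32) | route_data)
```

The benefit is that members of the pre-defined set consume narrower TCAM entries, saving space in a resource that is typically scarce and power-hungry.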
Abstract:
Congestion is controlled in an internetwork having at least two segments coupled by a router where at least one connection between communication devices passes through the router. Each connection is assumed to use a window-based flow control protocol between its source and destination. On receiving an acknowledgment from a connection in the router, where the acknowledgment contains a window size set by the destination, the router adaptively determines a second window size for the connection based on the router's average buffer occupancy and its instantaneous buffer occupancy. If the window size in the acknowledgment exceeds this second window size, the window size in the acknowledgment is overwritten to select the second window size. The router then forwards the acknowledgment to the source, thereby controlling the window size available to the source as a function of the congestion in the router.
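The acknowledgment-rewriting step can be sketched as below. How the router combines average and instantaneous buffer occupancy into its second window size is an illustrative assumption (a simple linear blend), not the patent's exact formula; the key behavior is that the advertised window is only ever reduced, never enlarged.

```python
def forward_ack(ack_window, avg_occupancy, inst_occupancy, buffer_size):
    """Clamp the destination-advertised window to a router-chosen window
    derived from buffer occupancy, and return the window to forward."""
    # Blend of average and instantaneous occupancy (assumed weighting).
    congestion = 0.5 * (avg_occupancy + inst_occupancy) / buffer_size
    router_window = max(1, int(ack_window * (1.0 - congestion)))
    # Overwrite the ACK only if the router's window is smaller.
    return min(ack_window, router_window)
```

Because the source's send rate is bounded by the window it receives, rewriting ACKs in flight throttles the source without requiring any protocol changes at the endpoints.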
Abstract:
A parallel-prefix modulo 2^n−1 adder that is as fast as the fastest parallel-prefix 2^n integer adders, does not require an extra level of logic to generate the carry values, and has a very regular structure to which pipeline registers can easily be added. All nodes of the adder have a fanout ≤ 2. In the prefix structure of the adder, each carry value output by the parallel prefix structure is determined by all of the bits in the operands input to the adder. In one embodiment, there are log2(n) stages in the prefix structure. Each stage has n logical operators, and all of the logical operators in the prefix structure are of the same kind. Pipeline registers may be inserted before and/or after a stage in the prefix structure.
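The arithmetic such an adder implements — one's-complement (modulo 2^n−1) addition with an end-around carry — can be checked in software. This is a behavioral model of the function, not of the parallel-prefix hardware structure:

```python
def add_mod_2n_minus_1(a, b, n):
    """Add a and b modulo 2**n - 1 using an end-around carry:
    the carry out of bit n-1 is re-injected at bit 0."""
    m = (1 << n) - 1          # the modulus 2^n - 1, all-ones in n bits
    s = a + b
    s = (s & m) + (s >> n)    # end-around carry re-injection
    return 0 if s == m else s # fold the second representation of zero
```

In one's-complement arithmetic the all-ones word is a second representation of zero, which the final fold normalizes away.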
Abstract:
A platform that seamlessly hosts a plurality of disparate types of packet processing applications. One or more applications are loaded onto a service card on the platform. A programmable path structure is included that maps a logical path for processing of the packets through one or more of the plurality of service cards according to characteristics of the packets. Multiple path structures may be programmed into the platform to offer different service paths for different types of packets.
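A programmable path structure of this kind can be sketched as a lookup from packet characteristics to an ordered list of service cards. The traffic classes and card names below are illustrative assumptions:

```python
# Hypothetical programmed path structures: packet type -> ordered
# sequence of service cards the packet traverses.
PATH_TABLE = {
    "voip": ["firewall_card", "qos_card"],
    "http": ["firewall_card", "dpi_card", "nat_card"],
}
DEFAULT_PATH = ["firewall_card"]

def service_path(packet_type):
    """Return the logical service path for a packet's traffic class."""
    return PATH_TABLE.get(packet_type, DEFAULT_PATH)
```

Because the table is data rather than wiring, new service paths can be programmed without changing the platform itself.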
Abstract:
The invention includes a method and apparatus for detecting and suppressing echo in a packet network. A method according to one embodiment extracts voice coding parameters from packets of a reference packet stream and from packets of a target packet stream, and determines whether the voice content of the target packet stream is similar to that of the reference packet stream by processing the two sets of voice coding parameters. Based on that similarity determination, the method determines whether the target packet stream includes an echo of the reference packet stream.
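The similarity test can be sketched as a normalized correlation between the two streams' parameter sequences. Treating the voice coding parameters as plain per-frame floats and using a fixed correlation threshold are illustrative assumptions; the patent does not specify this particular comparison.

```python
def normalized_correlation(ref, tgt):
    """Pearson-style correlation of two equal-length parameter sequences."""
    n = min(len(ref), len(tgt))
    ref, tgt = ref[:n], tgt[:n]
    mr = sum(ref) / n
    mt = sum(tgt) / n
    num = sum((r - mr) * (t - mt) for r, t in zip(ref, tgt))
    den = (sum((r - mr) ** 2 for r in ref) *
           sum((t - mt) ** 2 for t in tgt)) ** 0.5
    return num / den if den else 0.0

def is_echo(ref_params, tgt_params, threshold=0.9):
    """Flag the target stream as an echo when its voice coding
    parameters track the reference stream's (threshold assumed)."""
    return normalized_correlation(ref_params, tgt_params) > threshold
```

Working on coded parameters rather than decoded audio avoids decompressing the streams, which is the practical advantage in a packet network.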
Abstract:
A method is disclosed for rate allocation within the individual switches of a communications network implementing a rate-based congestion control approach for best-effort traffic. The methodology of the invention centers on a new rate allocation algorithm which performs its allocation functions independently of the number of connections sharing a network link and therefore performs an allocation in Θ(1) complexity. With that implementation simplicity, the algorithm is particularly advantageous for implementation in ATM switches carrying a large number of virtual channels. The algorithm operates on bandwidth information supplied from the source of a connection in special cells or packet headers, such as ATM Resource Management cells. By storing parameter values for other connections sharing a network link, the algorithm requires a constant number of simple computations for each request from a connection for a bandwidth allocation. The algorithm is asynchronous and distributed in nature and converges to the max-min fairness allocation.
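The constant-time idea can be sketched as follows: the switch keeps aggregate state (bandwidth held by connections bottlenecked elsewhere, and a count of unconstrained connections) so that each Resource Management cell is handled in Θ(1) without iterating over all virtual channels. The state variables and the join rule below are illustrative assumptions; as in the abstract, fairness is reached by convergence over repeated RM-cell rounds, not in a single request.

```python
class LinkAllocator:
    """O(1)-per-request bandwidth allocation on one link (sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.constrained_bw = 0.0   # sum of rates of elsewhere-bottlenecked VCs
        self.n_unconstrained = 0    # VCs sharing the residual bandwidth equally

    def register(self, demand):
        """Handle one RM-cell request in constant time and return the
        bandwidth granted to the requesting connection."""
        # Tentative equal share if this VC joins the unconstrained set.
        share = (self.capacity - self.constrained_bw) / (self.n_unconstrained + 1)
        if demand < share:
            self.constrained_bw += demand   # bottlenecked elsewhere: grant demand
            return demand
        self.n_unconstrained += 1           # grant the current fair share
        return share
```

Each call touches only the two aggregates, so the cost per request is independent of the number of virtual channels on the link.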
Abstract:
A method for rate allocation within the individual switches of a communication network implementing a rate-based congestion control approach for best-effort traffic. The method enables a guaranteed minimum bandwidth to be allocated to each communication session or connection, in addition to fairly dividing the available bandwidth among the competing connections. The method also estimates the transmission rate of each connection, instead of relying on explicit indications provided by the connection, and uses this information in the computation of its fair share of bandwidth. The method also calculates the available bandwidth on the link where the bandwidth is allocated, by measuring the utilization of the link periodically. Finally, the method incorporates a mechanism to detect connections that remain idle and withdraw allocations from them so as to avoid under-utilization of the link bandwidth.
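The minimum-guarantee and idle-withdrawal mechanisms can be sketched together. The idle timeout, the logical clock, and the equal split of the surplus above the guaranteed minimums are illustrative assumptions; the patent's method additionally estimates per-connection rates and measures link utilization, which this sketch omits.

```python
class Connection:
    def __init__(self, min_bw):
        self.min_bw = min_bw      # guaranteed minimum bandwidth
        self.last_active = 0      # logical time of last observed traffic

class MinGuaranteeAllocator:
    """Sketch: guaranteed minimum plus fair surplus, with withdrawal
    of allocations from connections detected as idle."""

    def __init__(self, link_bw, idle_timeout=3):
        self.link_bw = link_bw
        self.idle_timeout = idle_timeout
        self.conns = {}

    def activity(self, cid, now, min_bw=0.0):
        """Record traffic from connection cid at logical time now."""
        conn = self.conns.setdefault(cid, Connection(min_bw))
        conn.last_active = now

    def allocate(self, now):
        """Return per-connection allocations, skipping idle connections."""
        active = {cid: c for cid, c in self.conns.items()
                  if now - c.last_active <= self.idle_timeout}
        if not active:
            return {}
        guaranteed = sum(c.min_bw for c in active.values())
        surplus = max(0.0, self.link_bw - guaranteed) / len(active)
        return {cid: c.min_bw + surplus for cid, c in active.items()}
```

Withdrawing allocations from idle connections keeps the measured link bandwidth available to the connections that are actually sending, avoiding the under-utilization the abstract describes.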