Abstract:
A network processing device having multiple processing engines capable of providing multi-context parallel processing is disclosed. The device includes a receiver and a packet processor, wherein the receiver is capable of receiving packets at a predefined packet flow rate. The packet processor, in one embodiment, includes multiple processing engines, wherein each processing engine is further configured to include multiple context processing components. The context processing components are used to provide multi-context parallel processing to increase throughput.
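The abstract above describes the mechanism at a block level; the following C sketch illustrates one way the multi-context idea could work, with a round-robin switch to the next ready context when the current one stalls. All names and the per-engine context count are illustrative assumptions, not details from the disclosure.

    #include <stdint.h>
    #include <stddef.h>

    #define CONTEXTS_PER_ENGINE 4   /* assumed count; not fixed by the text  */

    typedef struct {
        uint8_t *packet;    /* packet currently bound to this context        */
        size_t   pc;        /* saved program position for the context        */
        int      ready;     /* nonzero when the context can make progress    */
    } context_t;

    typedef struct {
        context_t ctx[CONTEXTS_PER_ENGINE];
        int       current;  /* index of the context now executing            */
    } engine_t;

    /* Round-robin context switch: pick the next ready context so the engine
     * keeps working while other contexts wait on long-latency operations,
     * which is how multiple contexts raise per-engine throughput.           */
    static int next_ready_context(engine_t *e)
    {
        for (int i = 1; i <= CONTEXTS_PER_ENGINE; i++) {
            int c = (e->current + i) % CONTEXTS_PER_ENGINE;
            if (e->ctx[c].ready)
                return c;
        }
        return e->current;  /* nothing else ready; stay on current context   */
    }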
Abstract:
Multiple frames of SDH framed data are received. Each frame has an overhead portion and a payload portion. The payload portions of multiple frames are identified and extracted. These payloads are switched and re-mapped to a different STM structure as required.
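As a rough illustration, assuming the standard STM-1 layout of 9 rows by 270 columns with the first 9 columns of each row carrying overhead, payload extraction amounts to copying the non-overhead columns row by row. This sketch ignores the AU pointer mechanism that lets the payload float across frame boundaries.

    #include <stdint.h>
    #include <string.h>

    #define ROWS      9
    #define COLS    270          /* STM-1: 9 rows x 270 columns              */
    #define OH_COLS   9          /* overhead and pointer columns per row     */

    /* Copy the payload columns of one frame into out; returns byte count.  */
    static size_t extract_payload(const uint8_t frame[ROWS * COLS],
                                  uint8_t out[ROWS * (COLS - OH_COLS)])
    {
        size_t n = 0;
        for (int r = 0; r < ROWS; r++) {
            /* skip the overhead columns at the start of each row */
            memcpy(out + n, frame + r * COLS + OH_COLS, COLS - OH_COLS);
            n += COLS - OH_COLS;
        }
        return n;
    }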
Abstract:
A network system, having an array of processing engines (“PEs”) and a delay line, improves packet processing performance for time division multiplexing (“TDM”) sequencing of PEs. The system includes an ingress circuit, a delay line, a demultiplexer, a tag memory, and a multiplexer. After the ingress circuit receives a packet from an input port, the delay line stores the packet together with a unique tag value. The delay line, in one embodiment, provides a predefined time delay for the packet. Once the demultiplexer forwards the packet to an array of PEs for packet processing, a tag memory stores the tag value indexed by PE number. The PE number identifies the PE in the array that was assigned to process the packet. The multiplexer is capable of multiplexing packets from the PE array and, in response to the tag value, replacing the packet in the delay line with the processed packet.
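The tag mechanism can be pictured as follows: the delay line holds each packet next to its tag, the tag memory maps a PE number back to that tag, and write-back replaces the delayed copy with the processed one. The C sketch below models the delay line as a tag-searchable buffer; the sizes and the linear search (hardware would match tags associatively) are assumptions for illustration.

    #include <stdint.h>
    #include <string.h>

    #define DELAY_SLOTS 64        /* assumed depth of the delay line         */
    #define NUM_PES     16        /* assumed size of the PE array            */
    #define PKT_MAX    256

    typedef struct {
        uint32_t tag;             /* unique tag written at ingress           */
        uint8_t  data[PKT_MAX];   /* the delayed packet                      */
    } slot_t;

    static slot_t   delay_line[DELAY_SLOTS];  /* circular delay buffer       */
    static uint32_t tag_by_pe[NUM_PES];       /* tag memory, indexed by PE   */

    /* When PE 'pe' finishes, look up its tag, find the delayed packet that
     * carries the same tag, and replace it with the processed packet.       */
    static void writeback(int pe, const uint8_t *processed, size_t len)
    {
        uint32_t tag = tag_by_pe[pe];
        for (int s = 0; s < DELAY_SLOTS; s++) {
            if (delay_line[s].tag == tag) {
                memcpy(delay_line[s].data, processed, len);
                return;
            }
        }
    }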
Abstract:
An apparatus and method for using direct memory access (“DMA”) to facilitate netflow statistics are disclosed. A network device such as a router or a switch, in one embodiment, includes a statistic component, a local memory, and a memory access controller. The statistic component is configured to gather information relating to network usage from packet flows or netflows in response to corresponding index values or tags. While the local memory, such as a cache, provides the index values or tags assignable to packet flows, the memory access controller, such as a DMA engine, transfers at least a portion of the index values or tags between the local memory and a main memory to enhance the capacity of the local memory.
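One way to read this is as a small on-chip statistics table backed by main memory, where the DMA moves entries in and out to make the local memory look larger. The sketch below models the transfer with memcpy in place of the DMA engine; the table sizes and the eviction policy are assumptions.

    #include <stdint.h>
    #include <string.h>

    #define CACHE_FLOWS 1024      /* assumed on-chip (local memory) size     */
    #define MAIN_FLOWS  65536     /* assumed main-memory table size          */

    typedef struct {
        uint32_t flow_tag;        /* index value identifying the netflow     */
        uint64_t packets, bytes;  /* gathered usage statistics               */
    } flow_stat_t;

    static flow_stat_t cache[CACHE_FLOWS];    /* local memory                */
    static flow_stat_t main_mem[MAIN_FLOWS];  /* stand-in for main memory    */

    /* Evict one entry to main memory to free a local slot; the memcpy
     * stands in for the DMA transfer of index values and statistics.
     * Tags are assumed to fit within the main-memory table.                 */
    static void spill_to_main(int victim)
    {
        uint32_t tag = cache[victim].flow_tag;
        memcpy(&main_mem[tag], &cache[victim], sizeof(flow_stat_t));
        memset(&cache[victim], 0, sizeof(flow_stat_t));
    }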
Abstract:
A network device having a system performance measurement unit that employs one or more global time stamps for measuring device performance is disclosed. The device includes an ingress circuit, a global time counter, an egress circuit, and a processor. The ingress circuit is configured to receive a packet from an input port while the global time counter generates an arrival time stamp in accordance with the arrival time of the packet. The egress circuit is capable of forwarding the packet to other network devices via an output port. The processor, in one embodiment, is configured to calculate packet latency in response to the arrival time stamp.
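The latency calculation itself is a subtraction of time stamps. A minimal sketch, assuming a free-running 32-bit global counter: unsigned arithmetic keeps the result correct across counter wraparound, as long as the latency is shorter than one full counter period.

    #include <stdint.h>

    static uint32_t global_time;  /* free-running global time counter        */

    typedef struct {
        uint32_t arrival_ts;      /* stamped by the ingress circuit          */
        /* ... packet contents ... */
    } packet_t;

    /* Ingress: record the arrival time stamp on the packet. */
    static void on_ingress(packet_t *p)
    {
        p->arrival_ts = global_time;
    }

    /* Egress: packet latency is the elapsed count since arrival; unsigned
     * subtraction handles wraparound of the counter.                        */
    static uint32_t latency(const packet_t *p)
    {
        return global_time - p->arrival_ts;
    }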
Abstract:
An apparatus and a method for enhancing digital processing implementation using non-power-of-two even count Gray coding are disclosed. The even count encoding device includes a first circuit, a second circuit, and a coding circuit. The first circuit, in one embodiment, is configured to identify a first portion of entries in a table in response to an input number. The second circuit is capable of determining a second portion of entries in the table in response to the input number, wherein the first portion and the second portion contain substantially the same number of entries. The coding circuit is operable to concatenate the second portion of the entries to the first portion of the entries to form an output table, which contains a sequence of even count integers in which any two adjacent integers differ in exactly one bit position.
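A concrete reading of this construction: take the first n/2 codes of the ordinary reflected Gray sequence as the first portion and the last n/2 codes as the second portion. Because the reflected sequence is symmetric (code 2^k - 1 - i equals code i with the top bit set), both the seam between the portions and the wrap from the last entry back to the first change exactly one bit, giving a cyclic Gray sequence of any even length n, power of two or not. The sketch below is an illustrative C version, not the patent's circuit.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t gray(uint32_t i) { return i ^ (i >> 1); }

    /* Build a cyclic Gray sequence of even length n over k-bit codes,
     * where 2^k >= n: first portion = first n/2 reflected-Gray codes,
     * second portion = last n/2 codes, concatenated.                        */
    static void even_gray_table(uint32_t *out, uint32_t n, uint32_t k)
    {
        uint32_t full = 1u << k;
        for (uint32_t i = 0; i < n / 2; i++) {
            out[i]         = gray(i);                /* first portion        */
            out[n / 2 + i] = gray(full - n / 2 + i); /* second portion       */
        }
    }

    int main(void)
    {
        uint32_t t[6];
        even_gray_table(t, 6, 3);  /* 6 entries: even, not a power of two    */
        for (int i = 0; i < 6; i++)
            printf("entry %d = %u\n", i, (unsigned)t[i]);
        /* prints 0 1 3 7 5 4: adjacent entries differ in one bit,
         * including the wrap from 4 back to 0.                              */
        return 0;
    }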
Abstract:
An apparatus and a method for load balancing across multiple routes using an indirection table and a hash function during packet classification are disclosed. A network device such as a router includes a memory, a hash component, and a result memory. The memory, referred to as an indirection random access memory (“RAM”), is capable of storing information regarding the number of paths from source devices to destination devices. The memory, in one embodiment, provides a base index value and a range number indicating the number of paths associated with the base index value. The hash component generates a hash index in response to the base index value and the range number. Upon generation of the hash index, the result memory identifies a classification result in response to the hash index.
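In effect, the indirection entry narrows the hash to the set of usable paths. A minimal sketch, assuming flows are keyed by source and destination addresses; the table sizes and the hash function itself are arbitrary choices for illustration, and the range is assumed to be at least one.

    #include <stdint.h>

    typedef struct {
        uint32_t base;            /* base index into the result memory       */
        uint32_t range;           /* number of paths behind this entry       */
    } indirection_entry_t;

    typedef struct { uint32_t next_hop; } result_t;

    static indirection_entry_t indir_ram[4096];   /* indirection RAM         */
    static result_t            result_mem[65536]; /* result memory           */

    /* Hypothetical flow hash over the packet's source and destination.     */
    static uint32_t flow_hash(uint32_t src, uint32_t dst)
    {
        uint32_t h = src * 2654435761u ^ dst;
        return h ^ (h >> 16);
    }

    /* Pick one of 'range' result entries starting at 'base': packets of
     * the same flow always hash to the same path, so load is spread
     * across paths without reordering packets within a flow.                */
    static result_t classify(uint32_t indir_idx, uint32_t src, uint32_t dst)
    {
        indirection_entry_t e = indir_ram[indir_idx];
        return result_mem[e.base + flow_hash(src, dst) % e.range];
    }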
Abstract:
A processing system includes a group of processing units (“PUs”) arranged in a daisy chain configuration, or sequence, capable of parallel processing. The processing system, in one embodiment, includes the PUs, a demultiplexer (“demux”), and a multiplexer (“mux”). The PUs are connected or linked in a sequence or daisy chain configuration wherein a first PU is located at the beginning of the sequence and a last PU is located at the end of the sequence. Each PU is configured to read an input data packet from a packet stream during a designated reading time frame; outside of its designated reading time frame, a PU allows the packet stream to pass through. The demux forwards a packet stream to the first PU. The mux receives a packet stream from the last PU.
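The pass-through rule can be stated in a few lines: each PU owns a time slot, consumes the upstream packet only on its own slot, and otherwise forwards it unchanged to the next PU in the chain. In this C sketch the chain length, the slot assignment, and accept_packet are illustrative assumptions.

    #include <stdint.h>
    #include <stddef.h>

    #define NUM_PUS 8             /* assumed length of the daisy chain       */

    typedef struct { uint8_t *pkt; } stream_item_t;

    /* Hypothetical hand-off that starts local processing of a packet.      */
    static void accept_packet(int id, uint8_t *pkt)
    {
        (void)id; (void)pkt;      /* processing body omitted                 */
    }

    /* One stage of the chain: PU 'id' reads the packet only during its
     * designated reading time frame; otherwise the stream passes through.  */
    static stream_item_t pu_stage(int id, uint32_t time_slot, stream_item_t in)
    {
        if (in.pkt != NULL && time_slot % NUM_PUS == (uint32_t)id) {
            accept_packet(id, in.pkt);   /* consume during own time frame    */
            in.pkt = NULL;               /* emit an empty slot downstream    */
        }
        return in;                       /* pass through unchanged           */
    }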
Abstract:
An apparatus and method of using a cache to improve a learn rate for a content-addressable memory (“CAM”) are disclosed. A network device such as a router or a switch, in one embodiment, includes a key generator, a searching circuit, and a key cache, wherein the key generator is capable of generating a first lookup key in response to a first packet. The searching circuit is configured to search the content of the CAM to match the first lookup key. If the first lookup key is not found in the CAM, the key cache stores the first lookup key in response to a first miss.
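The role of the key cache can be sketched directly: on a CAM miss, remember the key so that subsequent packets of the same flow do not trigger duplicate learn operations while the first CAM update is still pending, which raises the effective learn rate. The cache depth, key width, and the cam_search/cam_learn stubs below are assumptions for illustration.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define KEY_CACHE_SIZE 64     /* assumed key cache depth                 */

    typedef struct { uint8_t bytes[16]; } lookup_key_t;

    static lookup_key_t key_cache[KEY_CACHE_SIZE];
    static int          cache_head;

    /* Stand-ins for the hardware CAM search and the (slow) learn path.     */
    static bool cam_search(const lookup_key_t *k) { (void)k; return false; }
    static void cam_learn(const lookup_key_t *k)  { (void)k; }

    static bool in_key_cache(const lookup_key_t *k)
    {
        for (int i = 0; i < KEY_CACHE_SIZE; i++)
            if (memcmp(&key_cache[i], k, sizeof(*k)) == 0)
                return true;
        return false;
    }

    /* On a miss, record the key and issue a single learn; repeated misses
     * for the same key are absorbed by the cache.                           */
    static void lookup(const lookup_key_t *k)
    {
        if (cam_search(k))
            return;                       /* hit: nothing to learn           */
        if (!in_key_cache(k)) {
            key_cache[cache_head] = *k;
            cache_head = (cache_head + 1) % KEY_CACHE_SIZE;
            cam_learn(k);
        }
    }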