Abstract:
A method to resequence packets includes sequentially allocating, in each source ingress adapter, a packet rank to each packet it receives. In each destination egress adapter, each ranked data packet is stored at a respective buffer address of an egress buffer. The buffer addresses of data packets received from the same source ingress adapter with the same priority level and switched through the same switching plane are linked in the same linked-list. The buffer addresses are preferably linked in their order of use in the egress buffer, so that each linked-list has a list head pointing to the oldest buffer address. The linked-lists are sorted into subsets, each subset grouping the linked-lists whose buffer addresses hold data packets received from the same source ingress adapter with the same priority level. For each subset, the packet ranks of the data packets stored at the buffer addresses pointed to by the list heads of the subset's linked-lists are compared to determine the next data packet to be put in sequence.
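A minimal sketch of the resequencing step described above, using hypothetical names: in the egress adapter, one linked-list per switching plane is kept for each (source ingress adapter, priority) pair, ordered by arrival with the oldest packet at the head, and the next in-sequence packet is the one whose head entry carries the smallest rank (assuming ranks increase monotonically without wrap-around).

```python
from collections import deque

class ResequencingSubset:
    """One deque per switching plane for a single (source adapter, priority)
    pair; each deque models a linked-list ordered by arrival, oldest at the
    head, holding (packet_rank, buffer_address) pairs."""

    def __init__(self, num_planes):
        self.lists = [deque() for _ in range(num_planes)]

    def store(self, plane, rank, buffer_addr):
        # Packets from one plane arrive in order, so appending keeps each
        # list sorted by rank with the oldest entry at the head.
        self.lists[plane].append((rank, buffer_addr))

    def next_in_sequence(self):
        # Compare the ranks at the head of every non-empty list and pick the
        # smallest: that packet is the next one to be put in sequence.
        heads = [(lst[0][0], plane) for plane, lst in enumerate(self.lists) if lst]
        if not heads:
            return None
        _, plane = min(heads)
        return self.lists[plane].popleft()   # (packet_rank, buffer_address)

# Illustrative use: rank 2 left its ingress adapter before rank 3, so it is
# released first even though it was switched through a different plane.
subset = ResequencingSubset(num_planes=2)
subset.store(plane=0, rank=3, buffer_addr=0x10)
subset.store(plane=1, rank=2, buffer_addr=0x20)
assert subset.next_in_sequence() == (2, 0x20)
```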
Abstract:
A queue scheduling mechanism in a data packet transmission system, the data packet transmission system including a transmission device for transmitting data packets, a reception device for receiving the data packets, a set of queue devices respectively associated with a set of priorities each defined by a priority rank, for storing each data packet transmitted by the transmission device into the queue device corresponding to its priority rank, and a queue scheduler for reading, at each packet cycle, a packet from one of the queue devices determined by a normal priority preemption algorithm. The queue scheduling mechanism includes a credit device that provides, at each packet cycle, a value N defining the priority rank to be considered by the queue scheduler, whereby a data packet is read by the queue scheduler from the queue device corresponding to priority N instead of the queue device determined by the normal priority preemption algorithm.
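A minimal sketch of the scheduling decision, assuming the normal priority preemption algorithm simply serves the highest-priority non-empty queue; the function and variable names are illustrative, not taken from the abstract.

```python
from collections import deque

def schedule_packet(queues, credit_value=None):
    """queues: list of deques indexed by priority rank (0 = highest priority).
    credit_value: the rank N supplied by the credit device for this packet
    cycle, or None when no credit applies."""
    if credit_value is not None and queues[credit_value]:
        return queues[credit_value].popleft()   # credit overrides preemption
    for queue in queues:                        # normal priority preemption:
        if queue:                               # serve the highest-priority
            return queue.popleft()              # non-empty queue
    return None                                 # nothing to send this cycle

# Illustrative use: the credit value selects rank 2 even though rank 1 is non-empty.
queues = [deque(), deque(["pkt_b"]), deque(["pkt_c"])]
assert schedule_packet(queues, credit_value=2) == "pkt_c"
```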
Abstract:
A queue scheduling mechanism in a data packet transmission system, the data packet transmission system including a transmission device for transmitting data packets, a reception device for receiving the data packets, a set of queue devices respectively associated with a set of priorities each defined by a priority rank, for storing each data packet transmitted by the transmission device into the queue device corresponding to its priority rank, and a queue scheduler for reading, at each packet cycle, a packet from one of the queue devices determined by a normal priority preemption algorithm. The queue scheduling mechanism includes a credit device that provides, at each packet cycle, a value N defining the priority rank to be considered by the queue scheduler, whereby a data packet is read by the queue scheduler from the queue device corresponding to priority N instead of the queue device determined by the normal priority preemption algorithm. The queue scheduling mechanism further includes an exhaustive priority register that registers at least one exhaustive priority rank, whereby a data packet is read by the queue scheduler from the queue device corresponding to the exhaustive priority rank rather than from the queue device corresponding to priority N.
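A sketch extending the previous one with the exhaustive priority register: an exhaustive rank whose queue is non-empty takes precedence over both the credit value N and the normal preemption order. The precedence order and the names are assumptions for illustration.

```python
from collections import deque

def schedule_packet(queues, credit_value=None, exhaustive_ranks=()):
    """queues: list of deques indexed by priority rank (0 = highest priority)."""
    for rank in exhaustive_ranks:               # exhaustive priorities win first
        if queues[rank]:
            return queues[rank].popleft()
    if credit_value is not None and queues[credit_value]:
        return queues[credit_value].popleft()   # then the credit-driven rank N
    for queue in queues:                        # then normal priority preemption
        if queue:
            return queue.popleft()
    return None

# Illustrative use: exhaustive rank 1 is served ahead of the credit-driven rank 2.
queues = [deque(), deque(["pkt_b"]), deque(["pkt_c"])]
assert schedule_packet(queues, credit_value=2, exhaustive_ranks=(1,)) == "pkt_b"
```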
Abstract:
A data switch is provided that routes fixed-size data packets from input ports to output ports, using a shared memory which holds a copy of each packet in buffers. Each output port has a queue which contains pointers to the buffers holding packets bound for that port. The number of shared-memory buffers holding packets is compared to the number of buffer pointers in the output queues, and in this way a Multicast Index (MCI), a metric of the level of multicast traffic, is derived. The switch includes a Switch Core Adaptation Layer (SCAL) which has a multicast input queue. Because traffic is handled based on priority class P, a multicast threshold MCT(P), associated with the multicast input queue, is established per priority. While traffic is being received, the MCI is updated and, for each priority class in each SCAL, compared to MCT(P) to determine whether the corresponding multicast traffic must be held.
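A minimal sketch, under the assumption that the MCI is derived as the ratio of enqueued buffer pointers to occupied shared-memory buffers (a multicast packet occupies one buffer but is referenced by several output-queue pointers); the function names and the dictionary-based MCT(P) are illustrative.

```python
def multicast_index(occupied_buffers, total_queue_pointers):
    """MCI assumed here to be pointers per occupied buffer: 1.0 for pure
    unicast, larger as more output queues reference the same buffer."""
    if occupied_buffers == 0:
        return 1.0                      # no traffic: unicast-equivalent level
    return total_queue_pointers / occupied_buffers

def should_hold_multicast(mci, mct, priority):
    """Hold the SCAL's multicast input queue for this priority class when the
    current MCI exceeds that class's multicast threshold MCT(P)."""
    return mci > mct[priority]

# Illustrative use: 80 buffers referenced by 200 pointers gives MCI 2.5, which
# exceeds the threshold of 2.0 set for priority class 1, so traffic is held.
mct = {0: 3.0, 1: 2.0}
assert should_hold_multicast(multicast_index(80, 200), mct, priority=1)
```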
Abstract:
Techniques are provided for hash-based routing table management in a distributed network switch. A frame having a source address and a destination address is received. If no routing entry for the source address is found in a routing table of a switch module in the distributed network switch, where the routing table is divided into slices of buckets, then routing information is determined for the source address and a routing entry is generated. The routing table is modified to include the routing entry, based on a set of hash functions and on properties of the slices.
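A minimal sketch of inserting a learned entry into a routing table divided into slices of buckets, one hash function per slice. The placement policy shown (put the entry in the least-occupied candidate bucket) is an assumption used only to illustrate how slice properties can guide the modification; the class and its methods are hypothetical.

```python
class SlicedRoutingTable:
    """Routing table divided into slices of buckets, one hash function per
    slice; an entry has exactly one candidate bucket per slice."""

    def __init__(self, hash_funcs, buckets_per_slice, bucket_size=4):
        self.hash_funcs = hash_funcs
        self.buckets_per_slice = buckets_per_slice
        self.bucket_size = bucket_size
        self.slices = [[[] for _ in range(buckets_per_slice)] for _ in hash_funcs]

    def _bucket(self, slice_idx, address):
        index = self.hash_funcs[slice_idx](address) % self.buckets_per_slice
        return self.slices[slice_idx][index]

    def lookup(self, address):
        for slice_idx in range(len(self.slices)):
            for entry in self._bucket(slice_idx, address):
                if entry["address"] == address:
                    return entry
        return None

    def insert(self, address, routing_info):
        # Candidate bucket in every slice; place the new entry in the least
        # occupied one (an assumed policy based on slice/bucket properties).
        fills = [(len(self._bucket(s, address)), s) for s in range(len(self.slices))]
        fill, slice_idx = min(fills)
        if fill >= self.bucket_size:
            return False                      # every candidate bucket is full
        self._bucket(slice_idx, address).append(
            {"address": address, "info": routing_info})
        return True

# Illustrative use with two slices keyed by two different hash functions.
table = SlicedRoutingTable([hash, lambda a: hash((a, 1))], buckets_per_slice=256)
table.insert("aa:bb:cc:dd:ee:01", {"port": 7})
assert table.lookup("aa:bb:cc:dd:ee:01")["info"]["port"] == 7
```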
Abstract:
A multi-module switching system comprising at least two switching modules adapted for receiving data packets from at least one input adapter and transmitting the data packets to at least one output adapter, each of the switching modules including a shared buffer for buffering a portion of a data packet received from an input adapter and transmitting the portion to an output adapter. One of the switching modules is a master module that receives a portion of a data packet containing a packet header and sends the control information contained therein serially to each other switching module acting as a slave module. Each slave module includes a delay computing structure adapted for computing the delay needed to transmit the control information from the master module to that slave module, and a first storing structure adapted for storing, during that delay, a portion of a data packet transmitted from an input adapter to the slave module before transmitting the portion to the respective shared buffer, such that the portion of the data packet is not received by the shared buffer before the slave module has received the control information from the master module.
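A minimal sketch of the slave-side behaviour: a data portion arriving from an input adapter is held in the first storing structure for a computed number of cycles so that it reaches the shared buffer only after the control information forwarded serially by the master module has arrived. The per-hop delay model and all names are assumptions for illustration.

```python
from collections import deque

class SlaveModule:
    """Delays data portions so they reach the shared buffer only after the
    control information sent serially by the master module has arrived."""

    def __init__(self, hops_from_master, cycles_per_hop=1):
        # Delay computing structure: assumed proportional to the serial
        # distance between the master module and this slave module.
        self.delay = hops_from_master * cycles_per_hop
        self.staging = deque()            # first storing structure
        self.shared_buffer = []

    def receive_portion(self, cycle, portion):
        # Hold the portion until the control information must have arrived.
        self.staging.append((cycle + self.delay, portion))

    def tick(self, cycle):
        # Move every portion whose delay has elapsed into the shared buffer.
        while self.staging and self.staging[0][0] <= cycle:
            _, portion = self.staging.popleft()
            self.shared_buffer.append(portion)

# Illustrative use: a portion received at cycle 0 by a slave two hops away is
# written to its shared buffer only at cycle 2.
slave = SlaveModule(hops_from_master=2)
slave.receive_portion(cycle=0, portion="payload-part")
slave.tick(cycle=1)
assert slave.shared_buffer == []
slave.tick(cycle=2)
assert slave.shared_buffer == ["payload-part"]
```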
Abstract:
A network processor dataflow chip and method for flexible dataflow are provided. The dataflow chip comprises a plurality of on-chip data transmission and scheduling circuit structures. The data transmission and scheduling circuit structures are selected responsive to indicators. Data transmission circuit structures may comprise selectable frame processing and data transmission functions. Selectable frame processing may comprise cut-and-paste, full-dispatch, and store-and-dispatch frame processing. Scheduling functions include full internal scheduling, calendar scheduling in communication with an external scheduler, and external calendar scheduling. In another aspect of the present invention, data transmission functions may comprise low-latency and normal-latency external processor interfaces for selectively providing privileged access to dataflow chip resources.
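A minimal sketch of how the selectable modes listed above might be chosen from configuration indicators; the indicator keys, enum values, and function are illustrative assumptions, not the chip's actual interface.

```python
from enum import Enum

class FrameMode(Enum):
    CUT_AND_PASTE = "cut_and_paste"
    FULL_DISPATCH = "full_dispatch"
    STORE_AND_DISPATCH = "store_and_dispatch"

class SchedulingMode(Enum):
    FULL_INTERNAL = "full_internal"
    CALENDAR_WITH_EXTERNAL_SCHEDULER = "calendar_with_external_scheduler"
    EXTERNAL_CALENDAR = "external_calendar"

def configure_dataflow(indicators):
    """indicators: configuration values (assumed to be read at initialisation)
    selecting the frame-processing mode, the scheduling mode, and whether the
    low-latency external processor interface is enabled."""
    frame_mode = FrameMode(indicators["frame_mode"])
    scheduling_mode = SchedulingMode(indicators["scheduling_mode"])
    low_latency = bool(indicators.get("low_latency_interface", False))
    return frame_mode, scheduling_mode, low_latency

# Illustrative use.
modes = configure_dataflow({"frame_mode": "cut_and_paste",
                            "scheduling_mode": "full_internal"})
assert modes[0] is FrameMode.CUT_AND_PASTE
```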
Abstract:
A line-adapter of a communications controller includes, for scanning the teleprocessing lines connected to it, cyclic scanning means (FES) exchanging information with the lines through a serial bidirectional link on which data and control information are partitioned into frames and slots. Since both the FES and the serial link work with their own timings, an interface (FESA) is provided to adapt the FES scanning to the serial link structure. The FESA includes temporary storage means for storing, on the one hand, data and control information transmitted from the LICs to the FES through the inbound serial link and, on the other hand, data and control information transmitted from the FES to the LICs through the outbound serial link. Access to the storage means by the FES and by the outbound and inbound serial links is time-shared and granted by an arbitration logic according to the relative operating priorities of these elements within the line-adapter of the communications controller.
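A minimal sketch of a fixed-priority arbiter of the kind described, in which the FES, the inbound serial link, and the outbound serial link contend for the temporary storage and the grant goes to the highest-priority active requester. The priority order chosen here is an assumption for illustration.

```python
# Requesters ranked by assumed relative priority, highest first.
PRIORITY_ORDER = ("inbound_link", "outbound_link", "fes")

def grant_access(active_requests):
    """active_requests: set of requester names asking for the temporary
    storage in the current time slot. Returns the single requester granted
    access, or None when nobody is asking."""
    for requester in PRIORITY_ORDER:
        if requester in active_requests:
            return requester
    return None

# Illustrative use: the inbound link outranks the FES in this assumed ordering.
assert grant_access({"fes", "inbound_link"}) == "inbound_link"
```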
Abstract:
Techniques are provided for routing table synchronization for a distributed network switch. In one embodiment, a first frame having a source address and a destination address is received. If no routing entry for the source address is found in a routing table of a first switch module, routing information is determined for the source address and a routing entry is generated. An indication is sent to a second switch module, requesting that a routing entry for the source address be generated in the second switch module based on the routing information.
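A minimal sketch of the synchronization step: when the first switch module has no routing entry for a frame's source address, it learns one and sends an indication so the second module installs a matching entry. The dictionary-based tables and the message format are illustrative assumptions.

```python
class SwitchModule:
    def __init__(self, name):
        self.name = name
        self.routing_table = {}           # source address -> routing info

    def receive_indication(self, message):
        # Generate the requested routing entry in this module's table.
        if message["op"] == "add_route":
            self.routing_table[message["address"]] = message["info"]

def handle_frame(frame, first_module, second_module):
    source = frame["source_address"]
    if source not in first_module.routing_table:
        info = {"port": frame["ingress_port"]}     # learned routing information
        first_module.routing_table[source] = info
        # Indication asking the second module to generate the same entry.
        second_module.receive_indication(
            {"op": "add_route", "address": source, "info": info})

# Illustrative use.
a, b = SwitchModule("A"), SwitchModule("B")
handle_frame({"source_address": "aa:bb:cc:dd:ee:02",
              "destination_address": "ff:ff:ff:ff:ff:ff",
              "ingress_port": 3}, a, b)
assert b.routing_table["aa:bb:cc:dd:ee:02"] == {"port": 3}
```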
Abstract:
Techniques are provided for cached routing table management in a distributed network switch. A frame having a source address and a destination address is received. If no routing entry for the source address is found in a routing table of a switch module in the distributed network switch, then routing information is determined for the source address and a routing entry is generated. The routing table is modified to include the routing entry, based on a set of hash functions. Upon accessing the generated routing entry in the modified routing table in response to a subsequent lookup request for the source address, a set of caches is modified to include the generated routing entry.
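A minimal sketch of the caching behaviour described above: a miss causes the learned entry to be written into the routing table only, and the entry is promoted into the cache when a subsequent lookup actually accesses it. The single-dictionary cache and table are simplifying assumptions.

```python
class CachedRoutingTable:
    def __init__(self):
        self.table = {}                   # hash-based routing table, simplified
        self.cache = {}                   # set of caches, modelled as one dict

    def learn(self, address, routing_info):
        # Miss path: generate the routing entry and add it to the table only;
        # the caches are updated lazily, on the next successful lookup.
        self.table[address] = routing_info

    def lookup(self, address):
        if address in self.cache:
            return self.cache[address]
        entry = self.table.get(address)
        if entry is not None:
            # Generated entry accessed on a subsequent lookup: cache it now.
            self.cache[address] = entry
        return entry

# Illustrative use: the entry enters the cache only after it is looked up.
routes = CachedRoutingTable()
routes.learn("aa:bb:cc:dd:ee:03", {"port": 5})
assert "aa:bb:cc:dd:ee:03" not in routes.cache
routes.lookup("aa:bb:cc:dd:ee:03")
assert routes.cache["aa:bb:cc:dd:ee:03"] == {"port": 5}
```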