Abstract:
A device in a server having a processor and storage. The device has a protocol blind network path indication unit configured to obtain an indicator corresponding to a predetermined path to a data communication unit in the network using a destination address of a received data packet, an upstream communication unit configured to transmit a network protocol blind packet including the data packet and the indicator corresponding to the predetermined data path to the data communication unit in the network, a combiner configured to bind the indicator to the data packet received by a downstream communication unit, and a protocol blind correlation storage unit configured to provide information related to target addresses and indicators corresponding to a plurality of predetermined data paths in the network. The protocol blind network path indication unit obtains the indicator corresponding to a predetermined path by accessing the protocol blind correlation storage unit.
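The core of this abstract is a lookup from destination address to a path indicator, followed by binding that indicator to the packet before it is sent upstream. The sketch below illustrates that flow in Python under assumed names (CorrelationStore, bind_indicator); it is an illustration of the idea, not the patented implementation.

```python
from typing import Optional

class CorrelationStore:
    """Protocol blind correlation storage: destination address -> path indicator."""
    def __init__(self):
        self._table = {}

    def learn(self, dest_addr: str, indicator: int) -> None:
        self._table[dest_addr] = indicator

    def lookup(self, dest_addr: str) -> Optional[int]:
        return self._table.get(dest_addr)

def bind_indicator(packet: dict, store: CorrelationStore) -> dict:
    """Combine the received packet with the indicator for its predetermined path."""
    indicator = store.lookup(packet["dest"])
    if indicator is None:
        raise KeyError(f"no predetermined path for {packet['dest']}")
    # The result carries the original packet opaquely ("protocol blind") plus the indicator.
    return {"indicator": indicator, "payload": packet}

store = CorrelationStore()
store.learn("10.0.0.7", indicator=3)
print(bind_indicator({"dest": "10.0.0.7", "data": b"..."}, store))
```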
Abstract:
A method for processing network traffic in a modular switching device that includes a source device, a target device, and a plurality of connecting devices includes generating a communication unit at the source device, where the communication unit is associated with a unique communication unit identifier and is to be transmitted to the target device; dividing the communication unit into a plurality of transmission units, including assigning a respective position identifier to each of the plurality of transmission units, where the position identifier is indicative of a position of the transmission unit within the communication unit, and assigning the communication unit identifier to each of the plurality of transmission units; and causing the plurality of transmission units to be transmitted in parallel to respective ones of the plurality of connecting devices, where each of the plurality of connecting devices connects the source device to the target device.
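A minimal sketch of the described split-and-spray step, assuming simple byte payloads and equal-size slices; the names TxUnit, split, and reassemble are illustrative and not the device's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class TxUnit:
    comm_id: int      # identifier shared by all pieces of one communication unit
    position: int     # position of this piece within the communication unit
    payload: bytes

def split(comm_id: int, data: bytes, n_links: int) -> list:
    """Divide a communication unit into one transmission unit per connecting device."""
    size = -(-len(data) // n_links)  # ceiling division
    return [TxUnit(comm_id, i, data[i * size:(i + 1) * size])
            for i in range(n_links)]

def reassemble(units: list) -> bytes:
    """Rebuild the communication unit at the target using the position identifiers."""
    assert len({u.comm_id for u in units}) == 1
    return b"".join(u.payload for u in sorted(units, key=lambda u: u.position))

units = split(comm_id=42, data=b"abcdefghij", n_links=4)
# Each unit would be handed to a different connecting device in parallel.
assert reassemble(units) == b"abcdefghij"
```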
Abstract:
A method of configuring a plurality of aggregation queues for aggregating multicast network traffic includes configuring a first one of the plurality of aggregation queues to store at least data units associated with a first multicast group (MCG) and data units associated with a second MCG, and configuring a second one of the plurality of aggregation queues to store only those data units that are associated with a third MCG.
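The configuration amounts to a mapping from multicast groups to aggregation queues, where one queue is shared by two MCGs and another is dedicated to a single MCG. The snippet below sketches that mapping with invented queue and group names.

```python
from collections import defaultdict, deque

# Assumed configuration: MCG1 and MCG2 share one aggregation queue,
# MCG3 gets a dedicated queue.
queue_for_mcg = {"MCG1": "q_shared", "MCG2": "q_shared", "MCG3": "q_dedicated"}
aggregation_queues = defaultdict(deque)

def enqueue(mcg: str, data_unit: bytes) -> None:
    """Place a data unit on the aggregation queue configured for its multicast group."""
    aggregation_queues[queue_for_mcg[mcg]].append((mcg, data_unit))

enqueue("MCG1", b"a"); enqueue("MCG2", b"b"); enqueue("MCG3", b"c")
print({q: list(items) for q, items in aggregation_queues.items()})
```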
Abstract:
Aspects of the disclosure provide a network device. The network device includes a first port coupled to a first device to communicate with the first device, and a clock wander compensation module. The first port recovers a first clock based on first signals received from the first device. The clock wander compensation module includes a global counter configured to count system clock cycles based on a system clock of the network device, and a first port counter configured to count first clock cycles based on the recovered first clock. Further, the first port transmits a first pause frame to the first device based on the global counter and the first port counter.
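The compensation scheme compares two free-running counters and reacts when the recovered port clock drifts relative to the system clock. The sketch below assumes a simple threshold comparison and an invented pause-quanta value; the actual trigger condition used by the device may differ.

```python
WANDER_THRESHOLD = 128   # assumed cycle-count difference that triggers a pause
PAUSE_QUANTA = 0xFFFF    # assumed pause time carried in the pause frame

def maybe_send_pause(global_count: int, port_count: int, send_pause) -> bool:
    """Compare the global counter and the port counter; pause the link partner
    if the recovered port clock has run ahead of the system clock."""
    if port_count - global_count > WANDER_THRESHOLD:
        send_pause(PAUSE_QUANTA)
        return True
    return False

sent = maybe_send_pause(global_count=10_000, port_count=10_200,
                        send_pause=lambda q: print(f"pause frame, quanta={q:#x}"))
print("pause sent:", sent)
```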
Abstract:
In a method for storing packets in a network device, a memory space spanning a plurality of external memory devices is partitioned into a plurality of multi-buffers. Each multi-buffer spans multiple memory devices in the plurality of external memory devices. Each multi-buffer is partitioned into a plurality of buffer chunks, and the plurality of buffer chunks are distributed among the multiple memory devices. Further, a packet is divided into one or more packet chunks including at least a first packet chunk. The one or more packet chunks are stored in one or more consecutive buffer chunks of at least a first multi-buffer of the plurality of multi-buffers.
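As a rough illustration, the following sketch partitions a packet into fixed-size chunks and assigns consecutive chunks to alternating memory devices; the chunk size, device count, and helper names are assumptions made only for this example.

```python
CHUNK_SIZE = 4          # bytes per buffer chunk (illustrative)
CHUNKS_PER_MB = 4       # buffer chunks per multi-buffer (illustrative)
N_DEVICES = 2           # external memory devices spanned by one multi-buffer

def device_for_chunk(chunk_index: int) -> int:
    # Consecutive buffer chunks of a multi-buffer land on different memory devices.
    return chunk_index % N_DEVICES

def store_packet(packet: bytes):
    """Divide a packet into packet chunks and place them in consecutive buffer chunks."""
    chunks = [packet[i:i + CHUNK_SIZE] for i in range(0, len(packet), CHUNK_SIZE)]
    assert len(chunks) <= CHUNKS_PER_MB, "would spill into another multi-buffer"
    return [(idx, device_for_chunk(idx), chunk) for idx, chunk in enumerate(chunks)]

for idx, dev, chunk in store_packet(b"0123456789ab"):
    print(f"buffer chunk {idx} on device {dev}: {chunk!r}")
```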
Abstract:
A method of controlling a plurality of forwarding databases provided in an Ethernet bridge having a plurality of devices. The method includes aging a first set of entries in a first forwarding database maintained by a first one of the plurality of devices. The first set of entries are owned by the first one of the plurality of devices. The method also includes transmitting one or more new address messages from the first one of the plurality of devices to a second one of the plurality of devices. The method further includes aging a second set of entries in the first forwarding database. The second set of entries are owned by the second one of the plurality of devices.
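The mechanism rests on per-entry ownership: each device ages only the entries it owns and announces newly learned addresses to its peers, which in turn age the entries they own. The sketch below models that with an assumed message format and aging policy; it is not the bridge's actual protocol.

```python
import time

class ForwardingDatabase:
    def __init__(self, device_id: int, max_age: float = 300.0):
        self.device_id = device_id
        self.max_age = max_age          # illustrative aging interval in seconds
        self.entries = {}               # mac -> (owner_device, last_seen_timestamp)

    def learn(self, mac: str, now: float) -> dict:
        """Install a locally owned entry and build a new-address message for peers."""
        self.entries[mac] = (self.device_id, now)
        return {"type": "new_address", "mac": mac, "owner": self.device_id}

    def install_remote(self, msg: dict, now: float) -> None:
        """Install an entry owned by the peer device that sent the message."""
        self.entries[msg["mac"]] = (msg["owner"], now)

    def age(self, owner_device: int, now: float) -> None:
        """Remove stale entries owned by the given device, leaving other owners alone."""
        self.entries = {mac: (owner, seen)
                        for mac, (owner, seen) in self.entries.items()
                        if owner != owner_device or now - seen < self.max_age}

fdb = ForwardingDatabase(device_id=1)
msg = fdb.learn("aa:bb:cc:dd:ee:ff", now=time.time())
fdb.age(owner_device=1, now=time.time() + 600)   # entry owned by device 1 is aged out
print(fdb.entries, msg["type"])
```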
Abstract:
An embodiment of the present invention reduces certain memory bandwidth requirements when sending a multicast message from a network device such as a router, bridge or switch. Separate output buffers are provided for different groups of egress ports, and incoming messages are written to some or all of the output buffers. A processing determination is made as to which egress ports will forward the message. Buffers associated with non-forwarding ports are released and the message is queued at the forwarding egress ports. When the message is forwarded, data is read from the output buffers associated with the forwarding egress ports.
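Conceptually, the message is written once into the output buffer of each egress-port group, and the forwarding decision then frees the copies that will not be sent. A minimal sketch, assuming one in-memory buffer per port group:

```python
# Illustrative buffers for three egress-port groups; names are invented.
buffers = {"group_a": [], "group_b": [], "group_c": []}

def receive_multicast(message: bytes) -> None:
    """Write the incoming message to some or all output buffers."""
    for buf in buffers.values():
        buf.append(message)

def apply_forwarding_decision(forwarding_groups: set) -> None:
    """Release the buffers of non-forwarding port groups; forwarding groups keep theirs."""
    for group, buf in buffers.items():
        if group not in forwarding_groups:
            buf.clear()

receive_multicast(b"mcast-payload")
apply_forwarding_decision({"group_a", "group_c"})
print({g: len(b) for g, b in buffers.items()})  # only forwarding groups retain data
```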
Abstract:
Resources allocated to a group of ports include a plurality of storage regions. Each storage region includes a committed area and a shared area. A destination storage region is identified for a packet. A packet queuing engine stores the packet in the committed area of the identified destination storage region if the packet has a first drop precedence value and if available storage space in the committed area exceeds a first threshold. The packet queuing engine stores the packet in the shared area of the identified destination storage region if the packet is not stored in the committed area and if available storage space in the shared area exceeds a second threshold defined by the packet's drop precedence value. If the packet is stored in neither the committed area nor the shared area, it may be dropped.
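The admission logic can be read as a two-step threshold check keyed by drop precedence. The sketch below uses invented threshold values and an assumed three-level drop precedence solely to make the decision order concrete.

```python
COMMITTED_THRESHOLD = 1024                       # illustrative first threshold
SHARED_THRESHOLDS = {0: 4096, 1: 2048, 2: 512}   # per drop precedence, illustrative

def admit(drop_precedence: int, committed_free: int, shared_free: int) -> str:
    """Decide where a packet is stored: committed area, shared area, or dropped."""
    # First drop precedence value may use the committed area if space exceeds the threshold.
    if drop_precedence == 0 and committed_free > COMMITTED_THRESHOLD:
        return "committed"
    # Otherwise try the shared area against the threshold for this drop precedence.
    if shared_free > SHARED_THRESHOLDS[drop_precedence]:
        return "shared"
    return "drop"

print(admit(drop_precedence=0, committed_free=4000, shared_free=8000))  # committed
print(admit(drop_precedence=2, committed_free=100, shared_free=600))    # shared
print(admit(drop_precedence=2, committed_free=100, shared_free=100))    # drop
```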
Abstract:
A flow classifier for a network device that processes packets including packet headers includes a hash generator that generates hash index values from search keys derived from the packet headers. A hash table receives the hash index values and outputs pointers. A flow table includes flow keys and corresponding actions. A variable length (VL) trie data structure uses the pointers to locate the flow keys for the search keys. The VL trie data structure selects different flow keys for the search keys that share a common hash index value. The pointers include node, NIL and leaf pointers. The flow classifier performs a default action for the NIL pointers. A pointer calculator accesses a VL trie table using the pointers.
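A full variable-length trie is beyond a short example, so the sketch below keeps the hash-then-resolve structure but substitutes a per-bucket list for the VL trie; treating an empty bucket as a NIL pointer yields the default action. All names and the CRC-based hash are assumptions, not the classifier's actual design.

```python
import zlib

HASH_BITS = 8
flow_table = {}   # hash index -> list of (flow_key, action); stands in for the VL trie

def search_key(header: dict) -> bytes:
    """Derive a search key from selected packet-header fields."""
    return f"{header['src']}|{header['dst']}|{header['proto']}".encode()

def hash_index(key: bytes) -> int:
    return zlib.crc32(key) & ((1 << HASH_BITS) - 1)

def install(header: dict, action: str) -> None:
    key = search_key(header)
    flow_table.setdefault(hash_index(key), []).append((key, action))

def classify(header: dict, default_action: str = "send_to_cpu") -> str:
    """Hash the search key, resolve collisions by exact key match, else default (NIL)."""
    key = search_key(header)
    for flow_key, action in flow_table.get(hash_index(key), []):
        if flow_key == key:
            return action
    return default_action

install({"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 6}, "forward_port_3")
print(classify({"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 6}))   # forward_port_3
print(classify({"src": "10.0.0.9", "dst": "10.0.0.2", "proto": 17}))  # send_to_cpu
```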