Abstract:
A plurality of equal cost paths through a network from a source node to a destination node are determined. A maximum bandwidth capacity is determined for each link of each of the plurality of equal cost paths, and from these maximum bandwidth capacities a smallest capacity link is determined for each of the plurality of equal cost paths. An aggregated maximum bandwidth from the source node to the destination node is determined by aggregating the smallest capacity links for each of the plurality of equal cost paths. Traffic is sent from the source node along each of the plurality of equal cost paths according to the capacity of the smallest capacity link of that path, wherein a total of the sent traffic does not exceed the aggregated maximum bandwidth.
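The aggregation described above can be sketched in a few lines. This is an illustrative model only: the function names, the link-capacity map, and the proportional traffic split are assumptions for demonstration, not details from the abstract.

```python
# Sketch of aggregating per-path bottlenecks over equal-cost paths.
# Names and data structures are illustrative assumptions.

def path_bottleneck(path_links, link_capacity):
    """Smallest-capacity link of one path (its bottleneck)."""
    return min(link_capacity[link] for link in path_links)

def aggregate_and_split(equal_cost_paths, link_capacity, demand):
    """Sum the per-path bottlenecks to get the aggregated maximum
    bandwidth, then split traffic across paths in proportion to each
    path's bottleneck, never exceeding that aggregate."""
    bottlenecks = [path_bottleneck(p, link_capacity) for p in equal_cost_paths]
    total = sum(bottlenecks)        # aggregated maximum bandwidth
    sendable = min(demand, total)   # total sent traffic must not exceed it
    return total, [sendable * b / total for b in bottlenecks]

# Two equal-cost paths from A to D: A-B-D and A-C-D.
caps = {("A", "B"): 10, ("B", "D"): 4, ("A", "C"): 6, ("C", "D"): 8}
total, shares = aggregate_and_split(
    [[("A", "B"), ("B", "D")], [("A", "C"), ("C", "D")]], caps, demand=20)
# total == 10 (4 + 6); shares == [4.0, 6.0]
```

Each path carries at most its bottleneck capacity, so the sum of the shares can never exceed the aggregated maximum bandwidth.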
Abstract:
Various embodiments are disclosed for increasing the Layer-3 LPM (longest prefix match) routing database in a network platform. In some embodiments, chipsets in fabric modules (FMs) can be partitioned into multiple banks. Network traffic can be directed towards a corresponding bank in the FMs by using an LPM table on a line card (LC). Entries in the LPM table on the LC can be programmed either statically or dynamically based upon LPM routes that are dynamically learned.
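A minimal sketch of the line-card lookup that steers traffic to an FM bank, assuming a simple prefix-to-bank table; the table contents and bank numbering are hypothetical, not from the abstract.

```python
import ipaddress

# Illustrative LC LPM table: prefix -> fabric-module bank (assumed values).
LC_LPM_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): 0,   # coarse prefix -> FM bank 0
    ipaddress.ip_network("10.1.0.0/16"): 1,  # more-specific prefix -> bank 1
}

def select_fm_bank(dst_ip):
    """Longest-prefix match on the line card chooses the FM bank."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in LC_LPM_TABLE if dst in net]
    if not matches:
        return None
    longest = max(matches, key=lambda net: net.prefixlen)
    return LC_LPM_TABLE[longest]

# select_fm_bank("10.1.2.3") -> 1, since the /16 wins over the /8
```

The longest matching prefix decides the bank, so more-specific routes can be placed in a different bank than the covering aggregate.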
Abstract:
In one embodiment, an approach is provided to efficiently program routes on line cards and fabric modules in a modular router to avoid hot spots and thus avoid undesirable packet loss. Each fabric module includes two separate processors or application specific integrated circuits (ASICs). In another embodiment, each fabric module processor is replaced by a pair of fabric module processors arranged in series with each other, and each processor is responsible for routing only one type of traffic, e.g., IPv4 or IPv6 traffic. The pair of fabric module processors communicates with one another via a trunk line, and any packet received at either one of the pair is passed to the other of the pair before being passed back to a line card.
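The series-connected processor pair can be modeled roughly as follows. The class, method names, and simplified route tables are illustrative assumptions; a real implementation would sit in ASIC forwarding logic, not Python.

```python
# Sketch of a pair of fabric-module processors in series: one owns IPv4
# routes, its partner owns IPv6, and a packet arriving at the "wrong"
# processor crosses the trunk to the other. All names are hypothetical.

class FMProcessor:
    def __init__(self, family, routes):
        self.family = family   # "ipv4" or "ipv6"
        self.routes = routes   # prefix -> egress line card (simplified)
        self.peer = None       # the other processor in the pair

    def receive(self, packet, via_trunk=False):
        if packet["family"] == self.family:
            return self.routes.get(packet["dst"], "drop")
        if not via_trunk:
            # Not our address family: pass the packet over the trunk.
            return self.peer.receive(packet, via_trunk=True)
        return "drop"

v4 = FMProcessor("ipv4", {"10.0.0.0/8": "lc1"})
v6 = FMProcessor("ipv6", {"2001:db8::/32": "lc2"})
v4.peer, v6.peer = v6, v4

# An IPv6 packet arriving at the IPv4 processor is routed by its peer:
# v4.receive({"family": "ipv6", "dst": "2001:db8::/32"}) -> "lc2"
```

The `via_trunk` flag prevents a packet from bouncing between the pair more than once.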
Abstract:
An example method is provided and includes receiving a multicast data message from a data source, the message being in a first virtual local area network and associated with a multicast group. The method also includes calculating a hash value based on the virtual local area network, the data source, and the multicast group, determining a port for a designated router in a Layer-2 network based on the hash value, and switching the multicast data message to the port that was determined.
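The hash-based port selection can be sketched as below. The particular hash function, the field encoding, and the port names are assumptions for illustration; the abstract does not specify them.

```python
import hashlib

def designated_router_port(vlan, source_ip, multicast_group, dr_ports):
    """Hash (VLAN, source, multicast group) to pick one DR port."""
    key = f"{vlan}|{source_ip}|{multicast_group}".encode()
    digest = hashlib.sha256(key).digest()       # assumed hash choice
    index = int.from_bytes(digest[:4], "big") % len(dr_ports)
    return dr_ports[index]

ports = ["eth1/1", "eth1/2", "eth1/3"]
port = designated_router_port(100, "192.0.2.10", "239.1.1.1", ports)
# The same (VLAN, source, group) tuple always maps to the same port.
```

Because the hash is computed over the (VLAN, source, group) tuple, all messages of one multicast flow are switched to the same designated-router port, while distinct flows spread across ports.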
Abstract:
Systems and methods may be provided embodying an optimized TRILL LAN network hello mode. The optimized hello mode may allow the number of LAN hellos exchanged to be reduced significantly in a steady state mode of operation. No modifications to the current TRILL specification are needed, and in a converged state (when designated RBridge election and appointed forwarder appointments are complete), only one hello PDU per RBridge is originated in every hello interval.
Abstract:
In one embodiment, a method includes receiving information on layer 2 topologies at a network device in a core network, mapping one or more Virtual Local Area Networks (VLANs) to the layer 2 topologies to provide differentiated services in said layer 2 topologies, defining multiple paths for each of the layer 2 topologies, and forwarding a packet received at the network device on one of the multiple paths. An apparatus for providing differentiated services in layer 2 topologies is also disclosed.
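The mapping and forwarding steps can be sketched as a pair of lookups. The topology names, the VLAN-to-topology table, and the hash-based path choice are illustrative assumptions only.

```python
# Sketch: map a packet's VLAN to a layer-2 topology (providing the
# differentiated service), then pick one of that topology's multiple
# paths. All table contents are hypothetical.

VLAN_TO_TOPOLOGY = {10: "gold", 20: "bronze"}
TOPOLOGY_PATHS = {
    "gold":   [["sw1", "sw2"], ["sw1", "sw3", "sw2"]],  # two paths
    "bronze": [["sw1", "sw4", "sw2"]],                  # one path
}

def forward(vlan, flow_hash):
    """Pick a topology from the packet's VLAN, then one of its paths."""
    topology = VLAN_TO_TOPOLOGY[vlan]
    paths = TOPOLOGY_PATHS[topology]
    return topology, paths[flow_hash % len(paths)]

# forward(10, 3) -> ("gold", ["sw1", "sw3", "sw2"]), since 3 % 2 == 1
```

Keeping the VLAN-to-topology mapping separate from the per-topology path set lets each service class be rerouted or expanded independently.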