Abstract:
Differentiated services for network traffic using weighted quality of service are provided. Network traffic is queued into separate per-flow queues, and traffic is scheduled from the per-flow queues into a group queue. Congestion management is performed on traffic in the group queue. Traffic is marked with priority values, and congestion management is performed based on the priority values. For example, traffic can be marked as “in contract” if it is within a contractual limit, and marked as “out of contract” if it is not within the contractual limit. Marking can also include classifying incoming traffic based on Differentiated Services Code Point. Higher priority traffic can be scheduled from the per-flow queues in strict priority over lower priority traffic. The lower priority traffic can be scheduled in a round-robin manner.
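To make the scheduling scheme concrete, the following is a minimal Python sketch, not the claimed implementation: packets wait in per-flow queues, high-priority flows are always served ahead of low-priority flows, flows of equal priority take turns round robin, and each dequeued packet is marked “in contract” or “out of contract” against an assumed per-flow byte budget before entering the shared group queue. All names (FlowQueue, contract_limit, schedule_once) are illustrative assumptions.

from collections import deque

class FlowQueue:
    def __init__(self, high_priority, contract_limit):
        self.pkts = deque()
        self.high_priority = high_priority
        self.contract_limit = contract_limit   # bytes permitted "in contract" (assumed budget)
        self.sent = 0

    def mark_and_pop(self):
        pkt = self.pkts.popleft()
        self.sent += pkt["size"]
        # Mark relative to the contractual limit so congestion management on the
        # group queue can prefer dropping "out of contract" traffic.
        pkt["mark"] = "in contract" if self.sent <= self.contract_limit else "out of contract"
        return pkt

def schedule_once(flows, group_queue):
    """One pass: strict priority for high-priority flows, round robin among peers."""
    high = [f for f in flows if f.high_priority and f.pkts]
    low = [f for f in flows if not f.high_priority and f.pkts]
    chosen = high[0] if high else (low[0] if low else None)
    if chosen is None:
        return
    # Rotate the served flow to the back so equal-priority flows share turns.
    flows.append(flows.pop(flows.index(chosen)))
    group_queue.append(chosen.mark_and_pop())

# Usage: a low-priority packet is only served because no high-priority packet waits.
flows = [FlowQueue(True, 1500), FlowQueue(False, 3000)]
flows[1].pkts.append({"size": 1000})
group = deque()
schedule_once(flows, group)   # packet enters the group queue marked "in contract"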
Abstract:
A network node or corresponding method of performing link aggregation reduces the number of Content Addressable Memory (CAM) entries required to make a forwarding decision for a given ingress flow, reducing the cost, size, and power consumption of the CAM and accompanying static RAM. In one embodiment, an ingress flow is mapped to an egress flow identifier. Subsequently, the egress flow identifier is mapped to a member of an aggregated group associated with an egress interface based on information available in the given ingress flow. Finally, the given ingress flow is forwarded to the member of the aggregated group associated with the egress interface. A hashing technique or two lookups may be used, alone or in combination, in mapping the ingress flow to the egress flow identifier to reduce CAM memory usage.
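The two-stage mapping can be pictured with a short Python sketch, offered only as an assumption-laden illustration rather than the node's actual tables: a first lookup maps the ingress flow to an egress flow identifier, a second lookup yields the aggregated group, and a hash over ingress flow fields selects the member port, so a single CAM-style entry covers the whole group instead of one entry per member. Table contents and field names are invented for illustration.

import hashlib

INGRESS_TO_EGRESS_FLOW = {            # first lookup (the CAM-like table)
    ("10.0.0.1", "10.0.1.5", 443): "egress-flow-7",
}

EGRESS_FLOW_TO_LAG = {                # second lookup: egress flow -> aggregated group members
    "egress-flow-7": ["port-1", "port-2", "port-3", "port-4"],
}

def forward(src_ip, dst_ip, dst_port):
    egress_flow = INGRESS_TO_EGRESS_FLOW[(src_ip, dst_ip, dst_port)]
    members = EGRESS_FLOW_TO_LAG[egress_flow]
    # Hash on information available in the ingress flow so packets of the same
    # flow always take the same member, preserving packet order within the flow.
    key = f"{src_ip}-{dst_ip}-{dst_port}".encode()
    return members[int(hashlib.md5(key).hexdigest(), 16) % len(members)]

print(forward("10.0.0.1", "10.0.1.5", 443))   # e.g. "port-3"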
Abstract:
A method or apparatus in an exemplary embodiment supports first and second layer network nodes that may be configured to communicate with each other via a communications path. In embodiments, a first network node communicates via a second network node across a layer 2 network to send data. The first network node typically uses a different link layer protocol than the second network node receiving the data. Thus, the first and second layer network nodes, having different link layer protocols, may communicate with each other. Accordingly, through use of embodiments of this invention, Neighbor Discovery (ND) is possible in a network, such as an IPv6 network, that has link layer protocols incompatible with each other.
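One way to picture this, purely as a hedged sketch and not the patented mechanism, is an intermediate node that terminates an IPv6 Neighbor Solicitation arriving over one link-layer protocol and re-originates it on the other side with a source link-layer address option in that side's format, so neither endpoint needs to understand the other's link layer. The NDMessage type and the addresses below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class NDMessage:
    msg_type: str             # "neighbor_solicitation" or "neighbor_advertisement"
    target_ipv6: str
    link_layer_option: bytes  # source/target link-layer address option

def relay_nd(msg: NDMessage, egress_link_layer_addr: bytes) -> NDMessage:
    """Re-originate an ND message onto a link with a different link-layer protocol."""
    # The relaying node substitutes its own address, encoded in the egress
    # link's format, so the receiver resolves the relay rather than a
    # link-layer address it cannot parse.
    return NDMessage(msg.msg_type, msg.target_ipv6, egress_link_layer_addr)

# Example: an NS arriving over a non-Ethernet link is re-sent onto Ethernet
# with the relay's 6-byte MAC as the source link-layer address option.
ns = NDMessage("neighbor_solicitation", "2001:db8::1", b"\x01\x02\x03\x04")
print(relay_nd(ns, b"\x00\x11\x22\x33\x44\x55"))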