Abstract:
A node in a Low-power and Lossy Network (LLN) is managed by monitoring a routing configuration on the node. A triggering parameter that is used to invoke an address change on a child node is tracked, and a threshold against which to compare the triggering parameter is accessed. The triggering parameter is compared to the threshold. Based on results of comparing the triggering parameter to the threshold, it is determined that an address change at the child node is appropriate. An address change of a child node appearing in the routing configuration is invoked based on the determination that an address change is appropriate.
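The abstract describes a simple track-compare-act loop. The Python sketch below illustrates that loop under stated assumptions: the class and callback names (AddressChangeMonitor, invoke_address_change) and what the triggering parameter actually measures are illustrative choices, not details from the abstract.

```python
class AddressChangeMonitor:
    """Track a triggering parameter for a child node and invoke an address
    change when the parameter crosses the accessed threshold.
    Illustrative sketch; parameter semantics and hooks are assumptions."""

    def __init__(self, threshold, invoke_address_change):
        self.threshold = threshold                            # accessed threshold
        self.invoke_address_change = invoke_address_change    # callback into the routing layer
        self.triggering_parameter = 0                         # tracked value

    def track(self, observed_value):
        """Update the tracked triggering parameter from the routing configuration."""
        self.triggering_parameter = observed_value

    def evaluate(self, child_node):
        """Compare the parameter to the threshold; invoke an address change if appropriate."""
        if self.triggering_parameter >= self.threshold:
            self.invoke_address_change(child_node)
            return True
        return False


# Hypothetical usage with a placeholder callback.
monitor = AddressChangeMonitor(threshold=10,
                               invoke_address_change=lambda node: print(f"renumber {node}"))
monitor.track(observed_value=12)
monitor.evaluate(child_node="child-42")   # threshold exceeded, so the callback fires
```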
Abstract:
A multicast message may be distributed by receiving, at a first node in a multicast network, a multicast message from a parent node of the first node. The multicast message is transmitted to child nodes of the first node in the multicast network. A population of the child nodes to which the multicast message was transmitted is accessed, and acknowledgement messages are received that reveal child nodes in an acknowledging subset comprising less than all of the child nodes of the first node. Child nodes revealed by the received acknowledgement messages are compared with the child nodes determined to be among the population of child nodes expected to receive the multicast message. Based on results of the comparison, a compressed non-acknowledging subset is identified and transmitted to the parent node.
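The abstract leaves the compression scheme unspecified; the Python sketch below assumes a per-child bitmap as one plausible compact encoding, with the function name and data layout chosen for illustration only.

```python
def non_acknowledging_report(expected_children, acked_children):
    """Compare the expected recipient population with the acknowledging subset and
    return a compact bitmap of non-acknowledging children: bit i is set when
    expected_children[i] did not acknowledge. (Bitmap encoding is an assumption.)"""
    bits = 0
    for i, child in enumerate(expected_children):
        if child not in acked_children:
            bits |= 1 << i
    # Pack into the fewest bytes needed so the report sent to the parent stays small.
    return bits.to_bytes((len(expected_children) + 7) // 8, "big")


expected = ["n1", "n2", "n3", "n4"]      # population the message was transmitted to
acked = {"n1", "n3"}                     # acknowledging subset (less than all children)
report = non_acknowledging_report(expected, acked)   # b'\x0a': n2 and n4 are missing
# The first node would then transmit `report` upstream to its parent node.
```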
Abstract:
In one embodiment, a computer network may include nodes and at least one root node. A first subset of the nodes may be located along a designated path (a directed acyclic graph (DAG)) through the computer network to the root node, where the first subset of nodes is configured to operate according to a first wake-up timer. A second subset of the nodes, which are not along the designated path, is in communication with at least one node of the first subset along the designated path and operates according to a second wake-up timer that is longer than the first wake-up timer. In this manner, the second subset of nodes may be awake less often, e.g., conserving energy.
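As a minimal sketch of the two wake-up timers, the Python snippet below assigns a shorter interval to nodes on the designated DAG path and a longer one to off-path nodes; the interval values and helper names are assumptions made for illustration.

```python
FIRST_WAKE_INTERVAL_S = 1.0    # first subset: nodes along the designated path to the root
SECOND_WAKE_INTERVAL_S = 8.0   # second subset: off-path nodes, longer timer, awake less often

def wake_interval(node_id, designated_path_nodes):
    """Return the wake-up timer for a node given the set of nodes on the designated path."""
    if node_id in designated_path_nodes:
        return FIRST_WAKE_INTERVAL_S
    return SECOND_WAKE_INTERVAL_S

designated_path = {"root", "a", "b"}            # first subset along the DAG toward the root
print(wake_interval("a", designated_path))      # 1.0: forwards traffic, wakes often
print(wake_interval("leaf7", designated_path))  # 8.0: off-path, conserves energy
```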
Abstract:
In one embodiment, Traffic Engineering (TE) is configured on a provider edge device to customer edge device (PE-CE) link extending from a provider edge device (PE) in a provider network to a customer edge device (CE) in a customer network. TE information regarding the TE-configured PE-CE link is conveyed from the PE to one or more other nodes in the provider network. TE information regarding one or more other TE-configured PE-CE links is received from one or more other nodes. A TE database (TED) is expanded to include information for the one or more other TE-configured PE-CE links. TE is applied to a customer edge device to customer edge device (CE-CE) path using at least some of the information for the one or more other TE-configured PE-CE links included in the TED.
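The sketch below models the TED expansion and a CE-CE constraint check in Python under stated assumptions: the dictionary layout, attribute names, and the bandwidth-only constraint are illustrative and not part of the abstract.

```python
ted = {
    # Core provider-network links already in the TED: (from, to) -> TE attributes.
    ("PE1", "P1"): {"bw": 10_000, "cost": 10},
    ("P1", "PE2"): {"bw": 10_000, "cost": 10},
}

def receive_pe_ce_advertisement(ted, pe, ce, te_attrs):
    """Expand the TED with TE information about a TE-configured PE-CE link
    conveyed by another node in the provider network."""
    ted[(pe, ce)] = te_attrs

ted[("CE1", "PE1")] = {"bw": 1_000, "cost": 5}                           # local PE-CE link
receive_pe_ce_advertisement(ted, "PE2", "CE2", {"bw": 1_000, "cost": 5})  # learned remotely

def ce_ce_path_bandwidth(ted, hops):
    """Apply a simple TE constraint (minimum available bandwidth) to a CE-CE path."""
    return min(ted[(a, b)]["bw"] for a, b in zip(hops, hops[1:]))

path = ["CE1", "PE1", "P1", "PE2", "CE2"]
print(ce_ce_path_bandwidth(ted, path))   # 1000: limited by the PE-CE access links
```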
Abstract:
Optimal automated exploration of hierarchical MPLS LSPs is disclosed. A path verification message (PVM) is transmitted from an initial router. Each label in the PVM's label stack corresponds to a hierarchy layer and is associated with a time-to-live (TTL) field. The TTL field for the label of the current layer is set so the PVM travels one hop from the initial router. In response, a reply message indicating that the PVM reached its destination is received. These steps are then repeated. For each successive PVM transmitted, the TTL field associated with the label corresponding to the current hierarchy layer is incremented. For any reply message that includes information describing a non-current layer, the next PVM's label stack is modified and the TTL field of the label for the described layer is incremented; all other TTL fields are left unchanged. If any received reply message indicates that a destination router was reached, the process terminates.
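A control-loop sketch of the layered TTL bookkeeping is shown below in Python. The send_pvm() transport hook and the reply format are assumptions; only the per-layer TTL handling follows the steps described in the abstract.

```python
def explore_lsp(label_stack, send_pvm, max_hops=255):
    """label_stack: list of dicts like {"label": 100, "ttl": 0}, outermost first.
    send_pvm(label_stack) is an assumed transport hook returning a reply dict such as
    {"destination_reached": bool, "layer": index_of_layer_described_by_reply}."""
    current_layer = 0
    label_stack[current_layer]["ttl"] = 1          # first PVM travels one hop
    for _ in range(max_hops):
        reply = send_pvm(label_stack)
        if reply["destination_reached"]:
            return label_stack                     # destination router reached: terminate
        if reply["layer"] != current_layer:
            # Reply describes a non-current layer: switch to that layer so that only
            # its label's TTL is incremented; all other TTL fields stay unchanged.
            current_layer = reply["layer"]
        label_stack[current_layer]["ttl"] += 1     # next PVM probes one hop further
    raise RuntimeError("destination not reached within max_hops")
```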
Abstract:
A technique dynamically determines whether to reestablish a Fast Rerouted primary tunnel based on path quality feedback of a utilized backup tunnel in a computer network. According to the novel technique, a head-end node establishes a primary tunnel to a destination, and a point of local repair (PLR) node along the primary tunnel establishes a backup tunnel around one or more protected network elements of the primary tunnel, e.g., for Fast Reroute protection. Once one of the protected network elements fails, the PLR node “Fast Reroutes,” i.e., diverts, the traffic received on the primary tunnel onto the backup tunnel, and sends notification of backup tunnel path quality (e.g., with one or more metrics) to the head-end node. The head-end node then analyzes the path quality metrics of the backup tunnel to determine whether to utilize the backup tunnel or reestablish a new primary tunnel.
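The head-end decision can be illustrated with a small Python check of the reported path quality metrics against local limits. The metric names and threshold values are assumptions chosen for the example, not values from the abstract.

```python
ACCEPTABLE = {"delay_ms": 50, "loss_pct": 1.0, "cost": 100}   # assumed local limits

def keep_backup_tunnel(reported_metrics, limits=ACCEPTABLE):
    """Return True to keep using the backup tunnel, False to trigger
    reestablishment of a new primary tunnel around the failed element."""
    return all(reported_metrics.get(name, float("inf")) <= limit
               for name, limit in limits.items())

# Example: the PLR's notification carries these path quality metrics for the backup tunnel.
notification = {"delay_ms": 80, "loss_pct": 0.2, "cost": 90}
if not keep_backup_tunnel(notification):
    print("reestablish a new primary tunnel")   # delay exceeds the assumed 50 ms limit
```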
Abstract:
In one embodiment, a sliced tunnel is signaled between a head-end node and a tail-end node. One or more fork nodes along the sliced tunnel are configured to furcate the sliced tunnel into a plurality of child tunnels of the sliced tunnel. Also, one or more merge nodes along the sliced tunnel are configured to merge a plurality of child tunnels of the sliced tunnel that intersect at the merge node.
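A minimal data model of the fork and merge roles is sketched below in Python; the class names, fields, and the way merge points are detected are assumptions made purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ChildTunnel:
    path: list                      # hops between a fork node and a merge node

@dataclass
class SlicedTunnel:
    head_end: str
    tail_end: str
    forks: dict = field(default_factory=dict)   # fork node -> list of child tunnels

    def furcate(self, fork_node, child_paths):
        """Configure a fork node to split the sliced tunnel into child tunnels."""
        self.forks[fork_node] = [ChildTunnel(path=p) for p in child_paths]

    def merge_points(self):
        """A merge node is where the child tunnels of a fork intersect again."""
        return {children[0].path[-1]: fork
                for fork, children in self.forks.items()
                if len({c.path[-1] for c in children}) == 1}

t = SlicedTunnel(head_end="A", tail_end="Z")
t.furcate("F", [["F", "B", "M"], ["F", "C", "M"]])   # two child tunnels rejoining at M
print(t.merge_points())                              # {'M': 'F'}
```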
Abstract:
In one embodiment, a node “N” within a computer network utilizing directed acyclic graph (DAG) routing selects a parent node “P” within the DAG, and, where P is not a DAG root, may determine a grandparent node “GP” as a parent node to the parent node P. The node N may then also select an alternate parent node “P′” that has connectivity to GP and N. N may then inform P and P′ about prefixes reachable via N, and also about P′ as an alternate parent node to P to reach the prefixes reachable via N. Also, in one embodiment, P may be configured to inform GP about the prefixes reachable via N and also about P′ as an alternate parent node to P to reach the prefixes reachable via N, and P′ may be configured to store the prefixes reachable via N without informing other nodes about those prefixes.
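The selection and advertisement steps can be sketched in Python as follows; the graph representation, message format, and function names are illustrative assumptions rather than details from the abstract.

```python
def select_alternate_parent(n, parent, dag_parent_of, neighbors_of):
    """Pick an alternate parent P' for node n: a neighbor of n, other than P, that
    also has connectivity to n's grandparent GP (skipped when P is the DAG root)."""
    gp = dag_parent_of.get(parent)          # grandparent, None if P is the root
    if gp is None:
        return None
    for candidate in neighbors_of(n):
        if candidate != parent and gp in neighbors_of(candidate):
            return candidate
    return None

def advertise_prefixes(n, parent, alt_parent, prefixes, send):
    """N informs P and P' about prefixes reachable via N, and about P' as an
    alternate parent to P for reaching those prefixes."""
    for target in (parent, alt_parent):
        if target is not None:
            send(target, {"prefixes": prefixes, "via": n, "alternate_parent": alt_parent})

# Hypothetical topology: N's parent is P, P's parent is GP, and P' ("Pp") sees both N and GP.
dag_parent = {"P": "GP", "N": "P"}
neigh = {"N": {"P", "Pp"}, "Pp": {"N", "GP"}, "P": {"N", "GP"}}
alt = select_alternate_parent("N", "P", dag_parent, lambda x: neigh.get(x, set()))
advertise_prefixes("N", "P", alt, ["2001:db8::/64"], lambda to, msg: print(to, msg))
```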
Abstract:
In one embodiment, egress provider edge devices (PEs) send advertisements to ingress PEs for address prefixes of a first multi-homed customer network that desires path diversity through a service provider network to a second customer network. A first ingress PE receives the advertisements, and determines whether a second ingress PE is multi-homed with the first ingress PE to the second customer network. If so, the first ingress PE computes a plurality of diverse paths within the service provider network from the first and second multi-homed ingress PEs to a corresponding egress PE. If a plurality of diverse paths exists, the first ingress PE employs one of those paths to establish a first tunnel from itself to a first egress PE, and the second ingress PE employs another of the paths to establish, from itself to a second egress PE, a second tunnel that is diverse from the first tunnel.
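The diversity check itself can be shown with a short Python sketch; the node-disjointness criterion and the path representation are assumptions chosen for illustration (the abstract does not specify how diversity is computed).

```python
def are_diverse(path_a, path_b):
    """True if the two paths share no intermediate provider nodes
    (endpoints excluded), i.e., the paths are node-diverse."""
    return set(path_a[1:-1]).isdisjoint(path_b[1:-1])

# Candidate paths from the two multi-homed ingress PEs to their egress PEs.
path_from_pe1 = ["PE1", "P1", "P3", "PE3"]
path_from_pe2 = ["PE2", "P2", "P4", "PE4"]

if are_diverse(path_from_pe1, path_from_pe2):
    # The first ingress PE signals the first tunnel; the second ingress PE signals
    # the second, diverse tunnel along the other path.
    print("signal tunnel PE1->PE3 and tunnel PE2->PE4")
```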
Abstract:
A technique enables Traffic Engineering (TE) on paths between customer edge devices (CEs) across a provider network (“CE-CE paths”) in a computer network. According to the novel technique, TE is configured on a link from a sending provider edge device (PE) to a first CE (“PE-CE link”), e.g., a CE of one or more virtual private networks (VPNs). The sending PE conveys TE information of the PE-CE link to one or more receiving PEs in the provider network. Upon receiving the TE information, each receiving PE expands a TE database (TED) of information regarding the provider network (i.e., a “core TED”) to include TE-configured PE-CE links, e.g., by updating one or more corresponding VPN TEDs (VTEDs) for each VPN maintained by the receiving PE. Once the receiving PEs have the TE information for configured PE-CE links from the provider network, one or more TE techniques may be applied to paths from a second CE of the receiving PE to the first CE (a CE-CE path) to thereby facilitate, e.g., establishment of TE-LSPs along CE-CE paths.
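To illustrate the per-VPN bookkeeping, the Python sketch below keeps a core TED next to per-VPN VTEDs on a receiving PE; the class, method, and field names are assumptions, as is the data layout.

```python
from collections import defaultdict

class ReceivingPE:
    def __init__(self):
        self.core_ted = {}                  # provider-network links -> TE attributes
        self.vteds = defaultdict(dict)      # VPN -> {(pe, ce): TE attributes}

    def on_pe_ce_te_info(self, vpn, sending_pe, ce, te_attrs):
        """Update the VTED for the given VPN when TE information about a
        TE-configured PE-CE link arrives from the sending PE."""
        self.vteds[vpn][(sending_pe, ce)] = te_attrs

    def ce_ce_links(self, vpn):
        """PE-CE links learned for this VPN, usable as the far end of CE-CE TE paths."""
        return dict(self.vteds[vpn])

pe = ReceivingPE()
pe.on_pe_ce_te_info("VPN-A", "PE1", "CE1", {"bw": 1_000, "cost": 5})
print(pe.ce_ce_links("VPN-A"))   # {('PE1', 'CE1'): {'bw': 1000, 'cost': 5}}
```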