Abstract:
In one embodiment, a replacement network communications path is determined using dedicated resources of an existing path. One or more network elements in a network determine a new communications path between a first network node and a second network node in the network while an existing communications path is currently configured in the network to carry traffic between the first and second network nodes. The existing communications path includes one or more exclusive physical resources dedicated to the existing communications path. The new communications path includes at least one of said exclusive physical resources dedicated to the existing communications path. One embodiment includes: subsequent to said determining the new communications path, removing the existing communications path from service, and then instantiating the new communications path, with the new communications path including said at least one of said exclusive physical resources.
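A minimal sketch of this replacement sequence, assuming paths are modeled as simple lists of resource identifiers and exclusivity is tracked in a set (the function name replace_path and the resource names are illustrative, not part of the embodiment):

```python
# Hypothetical sketch: determine a new path that may reuse the existing path's
# exclusive resources, tear down the existing path, then instantiate the new one.

def replace_path(existing_path, candidate_paths, exclusive_resources):
    # 1. Determine the new path while the existing path still carries traffic.
    #    Exclusive resources are acceptable only if they belong to the existing path.
    def is_feasible(path):
        return all(r not in exclusive_resources or r in existing_path
                   for r in path)

    new_path = next((p for p in candidate_paths if is_feasible(p)), None)
    if new_path is None:
        return None

    # 2. Remove the existing path from service, releasing its exclusive resources.
    for r in existing_path:
        exclusive_resources.discard(r)

    # 3. Instantiate the new path; in this sketch all of its resources become
    #    dedicated to it, including any reused exclusive resources.
    for r in new_path:
        exclusive_resources.add(r)
    return new_path


# Example: resource "R2" is exclusive to the existing path but reused by the new one.
existing = ["R1", "R2", "R3"]
candidates = [["R4", "R2", "R5"], ["R6", "R7"]]
print(replace_path(existing, candidates, exclusive_resources={"R2"}))
```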
Abstract:
In one embodiment, an apparatus in a network determines particular metadata to communicate infrastructure information associated with a particular packet to another apparatus in the network. The apparatus sends the particular packet into the network, the packet including a metadata channel, comprising said particular metadata, that is external to the payload of the particular packet. Examples of infrastructure metadata carried in a packet include, but are not limited to, information defining service chaining for processing of the packet, contextual information for processing of the packet, specific handling instructions for the packet, and operations, administration, and maintenance (OAM) instrumentation of the packet.
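A minimal sketch of a packet carrying such a metadata channel outside its payload, assuming a simple in-memory representation; the field names and example metadata values are illustrative, not the embodiment's actual encoding:

```python
# Hypothetical packet model: infrastructure metadata travels with the packet
# but in a channel external to the payload.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Packet:
    payload: bytes
    # Metadata channel: carried in the packet, external to the payload.
    metadata: Dict[str, object] = field(default_factory=dict)

def build_packet_with_metadata(payload: bytes) -> Packet:
    pkt = Packet(payload=payload)
    # Examples of infrastructure metadata named in the abstract.
    pkt.metadata["service_chain"] = ["firewall", "nat", "load-balancer"]
    pkt.metadata["context"] = {"tenant": "A", "app_class": "video"}
    pkt.metadata["handling"] = {"priority": "high"}
    pkt.metadata["oam"] = {"ingress_node": "edge-1", "timestamp_ns": 0}
    return pkt

pkt = build_packet_with_metadata(b"application data")
print(pkt.metadata["service_chain"])
```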
Abstract:
In one embodiment, once activation of use of a backup tunnel is detected for a primary tunnel, a level of congestion of the path of the backup tunnel may be determined. In response to the level being greater than a threshold, a head-end node of the primary tunnel is triggered to reroute the primary tunnel (e.g., by sending a request to a path computation element). Conversely, in response to the level not being greater than the threshold, the backup tunnel is allowed to remain activated.
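A minimal sketch of this decision logic, assuming congestion is reported as a numeric level and the reroute is requested through a supplied callback (the function and parameter names are hypothetical):

```python
# Hypothetical sketch of the backup-tunnel congestion check.

def on_backup_activated(backup_path_congestion: float,
                        threshold: float,
                        trigger_headend_reroute) -> str:
    """Called when use of a backup tunnel is detected for a primary tunnel."""
    if backup_path_congestion > threshold:
        # Congested backup path: trigger the primary tunnel's head-end node to
        # reroute, e.g., by sending a request to a path computation element.
        trigger_headend_reroute()
        return "reroute-triggered"
    # Otherwise the backup tunnel is allowed to remain activated.
    return "backup-retained"

print(on_backup_activated(0.9, 0.7, lambda: print("PCE request sent")))
print(on_backup_activated(0.4, 0.7, lambda: print("PCE request sent")))
```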
Abstract:
In one embodiment, a list of border node next hop options is maintained in a memory. The list of border node next hop options includes one or more border nodes that may be utilized to reach one or more prefixes. An index value is associated with each border node of the list of border node next hop options. A list of labels is also maintained in the memory. The index value of each border node is associated with a corresponding label for a path to reach that border node. When a change to the one or more border nodes is detected, the list of border node next hop options is updated to remove a border node. However, the label for the path to reach the removed border node is maintained in the list of labels for at least a period of time.
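A minimal sketch of the two index-keyed tables and the delayed label removal, assuming in-memory dictionaries and a timestamp-based hold-down period; the class name, hold-down value, and label values are illustrative:

```python
# Hypothetical sketch: next-hop options and labels share an index; a removed
# border node's label is retained for at least a period of time.
import time

class BorderNodeTable:
    def __init__(self, label_hold_seconds: float = 30.0):
        self.next_hops = {}     # index -> border node (next hop options)
        self.labels = {}        # index -> label for the path to that border node
        self.stale_since = {}   # index -> time the border node was removed
        self.label_hold_seconds = label_hold_seconds

    def add(self, index: int, border_node: str, label: int):
        self.next_hops[index] = border_node
        self.labels[index] = label

    def remove_border_node(self, index: int):
        # Remove the node from the next hop options immediately...
        self.next_hops.pop(index, None)
        # ...but keep its label for at least the hold-down period.
        self.stale_since[index] = time.monotonic()

    def expire_stale_labels(self):
        now = time.monotonic()
        for index, since in list(self.stale_since.items()):
            if now - since >= self.label_hold_seconds:
                self.labels.pop(index, None)
                del self.stale_since[index]

table = BorderNodeTable()
table.add(1, "border-A", 16001)
table.add(2, "border-B", 16002)
table.remove_border_node(2)
print(table.next_hops)   # border-B removed from the next hop options
print(table.labels)      # ...but label 16002 is still held
```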
Abstract:
In one embodiment, a device (e.g., a path computation element, PCE) monitors a tunnel set-up failure rate within a computer network, and determines whether to adjust an accuracy of routing information based on the tunnel set-up failure rate. For instance, the tunnel set-up failure rate being above a first threshold indicates a need for greater accuracy. In response to the tunnel set-up failure rate being above the first threshold, the device may then instruct one or more routers to shorten their routing update interval in the computer network.
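A minimal sketch of the device-side logic, assuming the failure rate is a fraction of recent set-up attempts, the interval is halved as an illustrative adjustment, and routers accept the change through a supplied instruct_router callback (all names are hypothetical):

```python
# Hypothetical sketch: shorten the routing update interval when the tunnel
# set-up failure rate indicates that routing information is too inaccurate.

def adjust_routing_accuracy(failure_rate: float,
                            first_threshold: float,
                            current_interval_s: float,
                            routers,
                            instruct_router) -> float:
    """Return the routing update interval the routers should use."""
    if failure_rate > first_threshold:
        # Too many set-up failures: greater accuracy is needed, so instruct
        # the routers to shorten their routing update interval.
        new_interval_s = current_interval_s / 2
        for router in routers:
            instruct_router(router, new_interval_s)
        return new_interval_s
    return current_interval_s

interval = adjust_routing_accuracy(
    failure_rate=0.25, first_threshold=0.10, current_interval_s=60.0,
    routers=["r1", "r2"],
    instruct_router=lambda r, i: print(f"{r}: update interval -> {i}s"))
print(interval)
```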
Abstract:
In one embodiment, an edge device of a core network may receive a plurality of packets from a peripheral network having a plurality of active connections to the core network, where each packet has a destination address and a source address. The edge device may compute a hash on the destination address or the source address of each packet, and determine whether the computed hash corresponds to the edge device. In response to the computed hash not corresponding to the edge device, the edge device may drop the packet, and in response to the computed hash corresponding to the edge device, the edge device may process the packet to forward the packet, where the dropping and processing load balances the plurality of packets over the active connections and prevents formation of loops in the core network.
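A minimal sketch of the per-packet decision, assuming each active connection terminates on one edge device and a flow hash over the addresses selects exactly one device from the list; the hash choice and device names are illustrative, not the embodiment's:

```python
# Hypothetical sketch: each edge device keeps packets whose flow hash maps to
# it and drops the rest, load balancing flows and preventing loops.
import hashlib

def owns_packet(edge_device_id: str, edge_devices: list, src: str, dst: str) -> bool:
    """True if this edge device is the one selected by the hash for this flow."""
    digest = hashlib.sha256(f"{src}->{dst}".encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(edge_devices)
    return edge_devices[index] == edge_device_id

def handle_packet(edge_device_id, edge_devices, packet):
    if owns_packet(edge_device_id, edge_devices, packet["src"], packet["dst"]):
        return "forward"   # process the packet toward the core network
    return "drop"          # another edge device owns this flow

edges = ["edge-1", "edge-2"]
pkt = {"src": "10.0.0.1", "dst": "192.0.2.7"}
print([handle_packet(e, edges, pkt) for e in edges])  # exactly one forwards
```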
Abstract:
A technique dynamically resizes Traffic Engineering (TE) Label Switched Paths (LSPs) at a head-end node of the TE-LSPs in preparation to receive redirected traffic in response to an event in a computer network. The novel dynamic TE-LSP resizing technique is based on the detection of an event in the network that could cause traffic destined for one or more other (“remote”) head-end nodes of one or more TE-LSPs to be redirected to an event-detecting (“local”) head-end node of one or more TE-LSPs. An example of such a traffic redirection event is failure of a remote head-end node or failure of any of its TE-LSPs. Specifically, the local head-end node maintains TE-LSP steady state sampling and resizing frequencies to adapt the bandwidth of its TE-LSP(s) to gradual changes in the network over time. Upon detection of an event identifying possible traffic redirection, the local head-end node enters a Fast Resize (FR) state, in which the sampling and resizing frequencies are increased to quickly adapt the TE-LSP bandwidth(s) to any received redirected traffic.
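A minimal sketch of the head-end node's steady-state versus Fast Resize behavior, assuming sampling and resizing are driven by configurable intervals; the class name, interval values, and the factor-of-ten speed-up are illustrative:

```python
# Hypothetical sketch: on a traffic redirection event, the local head-end
# enters a Fast Resize (FR) state with increased sampling/resizing frequencies.

class TeLspResizer:
    def __init__(self, sample_interval_s=300.0, resize_interval_s=900.0):
        self.steady_sample_s = sample_interval_s
        self.steady_resize_s = resize_interval_s
        self.sample_s = sample_interval_s
        self.resize_s = resize_interval_s
        self.fast_resize = False

    def on_redirection_event(self):
        # e.g., a remote head-end node or one of its TE-LSPs failed, so
        # redirected traffic may arrive here: sample and resize faster.
        self.fast_resize = True
        self.sample_s = self.steady_sample_s / 10
        self.resize_s = self.steady_resize_s / 10

    def on_traffic_stabilized(self):
        # Return to steady-state sampling and resizing frequencies.
        self.fast_resize = False
        self.sample_s = self.steady_sample_s
        self.resize_s = self.steady_resize_s

resizer = TeLspResizer()
resizer.on_redirection_event()
print(resizer.fast_resize, resizer.sample_s, resizer.resize_s)  # True 30.0 90.0
```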
Abstract:
Diversity constraints with respect to connections or links in a client layer are conveyed to a server layer where those links or connections are served by paths in the server layer. A network device in the server layer stores data associating paths in the server layer with identifiers for connections in the client layer. The network device in the server layer receives from a network device in the client layer a request to set up a path in the server layer for a connection in the client layer. The network device in the server layer receives information describing the diversity requirements associated with connections in the client layer. The server layer network device computes a route in the server layer for the connection specified in the request based on the diversity requirements.
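A minimal sketch of the server-layer computation, assuming paths are lists of link identifiers, the device keeps a map from client-layer connection identifiers to server-layer paths, and "diverse" means link-disjoint; all names are hypothetical:

```python
# Hypothetical sketch: pick a server-layer path for a client-layer connection
# that shares no link with paths serving connections it must be diverse from.

def compute_diverse_route(candidate_paths, request_conn_id,
                          diversity_groups, conn_to_path):
    # Links used by connections this connection must be diverse from.
    excluded_links = set()
    for group in diversity_groups:
        if request_conn_id in group:
            for other in group:
                if other != request_conn_id and other in conn_to_path:
                    excluded_links.update(conn_to_path[other])

    for path in candidate_paths:
        if not excluded_links.intersection(path):
            conn_to_path[request_conn_id] = path   # record the association
            return path
    return None

conn_to_path = {"client-conn-1": ["L1", "L2"]}       # stored path association
groups = [{"client-conn-1", "client-conn-2"}]        # must be diverse
candidates = [["L1", "L5"], ["L3", "L4"]]
print(compute_diverse_route(candidates, "client-conn-2", groups, conn_to_path))
```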
Abstract:
In one embodiment, a connectivity verification protocol (CVP) session for a particular virtual interface (VI) may operate on a particular group of two or more line cards (LCs) on a network device. The group of LCs may then transmit CVP session packets, at a reduced rate that is sufficient to maintain the CVP session based on a negotiated CVP full rate, onto the particular VI through ingress path processing on the network device. Ingress path processing, in particular, takes transmitted CVP session packets and egresses them onto an appropriate LC of the network device currently responsible for the VI egress. Also, in response to receiving CVP session packets for the VI on an LC of the network device currently responsible for the VI ingress, the receiving LC may forward the received CVP session packets to the particular corresponding group of LCs, which may then process the received CVP session packets.
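A minimal sketch of the reduced-rate transmission across a group of line cards and the ingress-side forwarding, assuming the negotiated CVP full rate is expressed in packets per second and each LC in the group carries an equal share; the scheduling split and names are illustrative:

```python
# Hypothetical sketch: the group of LCs together meets the negotiated CVP rate,
# and the ingress LC forwards received session packets to that group.

def per_lc_tx_rate(negotiated_full_rate_pps: float, group_size: int) -> float:
    """Each LC transmits at a reduced rate; the group as a whole still meets
    the negotiated full rate needed to maintain the CVP session."""
    return negotiated_full_rate_pps / group_size

def on_cvp_packet_received(ingress_lc: str, session_lc_group: list, deliver) -> None:
    """The LC currently responsible for VI ingress forwards received CVP
    session packets to the group of LCs owning the session for processing."""
    for lc in session_lc_group:
        deliver(lc)

group = ["LC1", "LC2", "LC3"]
print(per_lc_tx_rate(negotiated_full_rate_pps=30.0, group_size=len(group)))  # 10.0
on_cvp_packet_received("LC7", group, deliver=lambda lc: print(f"forward to {lc}"))
```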
Abstract:
In one embodiment, an inter-domain routing protocol stores an inter-domain routing protocol route having an associated next-hop address. A routing table is searched for an intra-domain routing protocol route that may be used to reach the next-hop address of the inter-domain routing protocol route. Such a route is marked as an important route for convergence. Later, in response to a change in the network requiring a routing table update, the intra-domain routing protocol route marked as an important route for convergence is processed by an intra-domain routing protocol before any other intra-domain routing protocol routes that are not marked as important routes for convergence are processed.
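A minimal sketch of the marking and prioritized processing, assuming routes are simple records and, for brevity, the next-hop address is matched exactly against an intra-domain prefix rather than by a longest-prefix lookup; all names are hypothetical:

```python
# Hypothetical sketch: mark intra-domain routes that resolve inter-domain
# next hops, then process those routes first during a routing table update.

def mark_important(intra_routes, inter_routes):
    next_hops = {r["next_hop"] for r in inter_routes}
    for route in intra_routes:
        # An intra-domain route reaching an inter-domain next-hop address is
        # important for convergence (exact match stands in for a prefix lookup).
        route["important"] = route["prefix"] in next_hops

def process_updates(intra_routes, apply_route):
    # Important-for-convergence routes are processed before all others.
    for route in sorted(intra_routes, key=lambda r: not r["important"]):
        apply_route(route)

intra = [{"prefix": "10.0.0.0/24", "important": False},
         {"prefix": "10.1.1.1/32", "important": False}]
inter = [{"prefix": "203.0.113.0/24", "next_hop": "10.1.1.1/32"}]
mark_important(intra, inter)
process_updates(intra, apply_route=lambda r: print("installing", r["prefix"]))
```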