Abstract:
Systems and methods for computing the paths of MPLS Traffic Engineering LSPs across Autonomous System and/or area boundaries. A distributed path computation algorithm exploits multiple path computation elements (PCEs) to develop a virtual shortest path tree (VSPT) resulting in computation of an end-to-end optimal (shortest) path. In some implementations, the VSPT is computed recursively across all the Autonomous Systems and/or areas between the head-end and tail-end of the Traffic Engineering LSP.
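The recursion is compact enough to sketch. Below is a minimal Python illustration, not the patented procedure: each domain is a hypothetical record holding an adjacency map and its entry border nodes, each call plays the role of one PCE, and the downstream PCE's partial VSPT is folded in as a set of virtual sinks.

```python
import heapq

def dijkstra(graph, src):
    """Distances from src over a {node: [(neighbor, cost), ...]} map."""
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, c in graph.get(u, ()):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist

def compute_vspt(domains, idx, tail):
    """For every entry border node of domain idx, return the cost of the
    shortest path to the tail-end, recursing through the downstream
    domains exactly as a chain of cooperating PCEs would."""
    dom = domains[idx]
    if idx == len(domains) - 1:
        sinks = {tail: 0}                      # tail-end domain
    else:
        sinks = compute_vspt(domains, idx + 1, tail)
    vspt = {}
    for entry in dom["entries"]:
        dist = dijkstra(dom["graph"], entry)
        costs = [dist[s] + c for s, c in sinks.items() if s in dist]
        if costs:
            vspt[entry] = min(costs)
    return vspt

# Two domains sharing border node "BR"; the tail "T" is in the last one.
domains = [
    {"entries": ["S"],  "graph": {"S": [("BR", 2)], "BR": []}},
    {"entries": ["BR"], "graph": {"BR": [("T", 3)], "T": []}},
]
print(compute_vspt(domains, 0, "T"))           # {'S': 5}
```

Because every level keeps only the shortest cost from each border node, the head-end obtains an end-to-end shortest path without any single PCE needing topology visibility beyond its own domain.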
Abstract:
Systems and methods for preemption of Traffic Engineering LSPs such that preemption decisions are made in a coordinated fashion along the path of a new LSP and computation of a new path for a preempted LSP can take advantage of knowledge of newly unavailable links. The efficiency of the preemption mechanism is greatly increased and the undesirable effects of heterogeneous preemption decisions are limited. The amount of signaling may also be significantly reduced. In one implementation, these advantages are achieved by exploiting an upstream preemption feedback mechanism that uses an incremental timer to delay preemption decisions until feedback is available.
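The coordination can be sketched as follows; this is an illustrative Python model, not the patented mechanism. The incremental timer is abstracted into iteration order: upstream hops decide first, so by the time a downstream hop's (longer) timer would fire, the upstream choices are already visible as feedback.

```python
def coordinated_preemption(path_nodes, new_lsp_bw):
    """Walk the new LSP's path head-end first. Each hop's incremental
    timer (hop * delta in a real implementation) is modeled by simply
    deciding in upstream-to-downstream order."""
    feedback = set()                    # ids of LSPs preempted upstream
    decisions = {}
    for node in path_nodes:
        victims = choose_victims(node["lsps"], new_lsp_bw, feedback)
        feedback |= {v["id"] for v in victims}
        decisions[node["name"]] = victims
    return decisions

def choose_victims(lsps, needed_bw, feedback):
    """Prefer LSPs already preempted upstream: preempting the same LSP
    again frees bandwidth at no extra cost, so feedback shrinks the
    total number of victims. Remaining candidates are taken by holding
    priority (7 = least important, preempted first)."""
    order = sorted(lsps, key=lambda l: (l["id"] not in feedback,
                                        -l["holding_prio"]))
    freed, victims = 0, []
    for lsp in order:
        if freed >= needed_bw:
            break
        victims.append(lsp)
        freed += lsp["bw"]
    return victims
```

Without the delay, each hop would choose victims independently, and the same new LSP could displace a different set of LSPs on every link it crosses.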
Abstract:
A technique triggers packing of path computation requests (PCRs) for traffic engineering (TE) label switched paths (LSPs) that are sent from one or more label-switched routers (LSRs) to a path computation element (PCE) of a computer network. According to the novel technique, incoming PCRs are packed into sets in response to a certain event, and one or more TE-LSPs (paths) are computed for each PCR of a particular set based on the PCRs of that set. Specifically, the PCE detects an event in the network (“network event”) indicating that an increase in the number of incoming PCRs has occurred, or that an increase is likely to occur due to, e.g., a change in a network element. Once the network event has been detected, the PCE packs the incoming PCRs into configured-length sets, e.g., for a specified time interval or a certain number of PCRs. The PCE computes paths for each PCR of a particular set while considering the other PCRs of that set, thereby reducing race conditions, signaling overhead, and set-up failures.
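A rough sketch of the packing behavior, with invented structures: the `PackingPCE` class below bounds sets by a request count (a time interval works the same way) and routes each PCR of a set against bandwidth already promised to the set's earlier PCRs. A real PCE would run CSPF over its TE database instead of the hop-count search used here.

```python
from collections import deque

class PackingPCE:
    """Toy PCE: after a network event, incoming PCRs are held and packed
    into fixed-size sets, then computed together against a shared view
    of residual bandwidth. All names here are illustrative."""

    def __init__(self, links, max_set=4):
        self.links = dict(links)        # {(u, v): available bandwidth}
        self.max_set = max_set
        self.packing = False
        self.buffer = []

    def on_network_event(self):
        """E.g., a flooded link failure: a burst of PCRs is likely."""
        self.packing = True

    def submit(self, src, dst, bw):
        self.buffer.append((src, dst, bw))
        if not self.packing or len(self.buffer) >= self.max_set:
            return self._flush()
        return []                       # held until the set is complete

    def _flush(self):
        batch, self.buffer = self.buffer, []
        residual = dict(self.links)     # one view shared by the set
        results = []
        for src, dst, bw in batch:
            path = self._route(src, dst, bw, residual)
            if path:                    # reserve, so later PCRs see it
                for u, v in zip(path, path[1:]):
                    key = (u, v) if (u, v) in residual else (v, u)
                    residual[key] -= bw
            results.append((src, dst, path))
        return results

    def _route(self, src, dst, bw, residual):
        """Fewest hops with residual bandwidth >= bw (stand-in for CSPF)."""
        frontier, seen = deque([[src]]), {src}
        while frontier:
            path = frontier.popleft()
            if path[-1] == dst:
                return path
            for (u, v), cap in residual.items():
                if cap < bw:
                    continue
                for a, b in ((u, v), (v, u)):
                    if a == path[-1] and b not in seen:
                        seen.add(b)
                        frontier.append(path + [b])
        return None

pce = PackingPCE({("A", "B"): 10, ("A", "C"): 10, ("C", "B"): 10}, max_set=2)
pce.on_network_event()
pce.submit("A", "B", 8)                 # held: the set is not full yet
print(pce.submit("A", "B", 8))          # both served; the second avoids A-B
```

Computed independently, both PCRs would have been placed on the direct A-B link and the second signaling attempt would have failed; that is exactly the race condition the packing avoids.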
Abstract:
A technique is provided for one or more network nodes to deterministically select data flows to preempt. In particular, each node employs a set of predefined rules which instructs the node as to which existing data flow should be preempted in order to admit a new high-priority data flow. The rules are precisely defined and are common to all nodes configured in accordance with the present invention. Illustratively, a network node not only selects a data flow to preempt, but additionally may identify other “fate sharing” data flows that may be preempted. As used herein, a group of data flows has a fate-sharing relationship if the application instance(s) containing the data flows functions adequately only when all the fate-shared flows are operational. In a first illustrative embodiment, after a data flow in a fate-sharing group is preempted, network nodes may safely tear down the group's remaining data flows. In a second illustrative embodiment, when a data flow is preempted, all its fate-shared data flows are marked as being “at risk.” Because the at-risk flows are not immediately torn down, it is less likely that resources allocated for the at-risk flows may be freed and then used to establish relatively lower-priority data flows instead of relatively higher-priority ones.
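As a sketch of both embodiments (with an invented rule order and flow layout): selection sorts all flows by the same totally ordered key, so every node arrives at the same victims, and the companions in a victim's fate-sharing group are marked at risk rather than torn down.

```python
def select_victims(flows, needed_bw):
    """Deterministic selection: every node sorts by the same total order
    (illustrative rules: lowest priority first, then largest bandwidth
    to minimize the victim count, then lowest flow id as tie-breaker;
    here a larger 'prio' value means a more important flow)."""
    order = sorted(flows, key=lambda f: (f["prio"], -f["bw"], f["id"]))
    freed, victims = 0, []
    for f in order:
        if freed >= needed_bw:
            break
        victims.append(f)
        freed += f["bw"]
    return victims if freed >= needed_bw else []

def mark_fate_sharing(victims, fate_groups, flows):
    """Second embodiment: companions of a preempted flow are only marked
    at risk, so their resources are not immediately recycled into
    lower-priority flows while the group's fate is being resolved."""
    victim_ids = {f["id"] for f in victims}
    at_risk = set()
    for group in fate_groups:           # each group: a set of flow ids
        if group & victim_ids:
            at_risk |= group - victim_ids
    for f in flows:
        if f["id"] in at_risk:
            f["at_risk"] = True
    return at_risk

flows = [{"id": 1, "prio": 1, "bw": 5},   # flows 1 and 2 belong to one
         {"id": 2, "prio": 1, "bw": 5},   # fate-sharing application
         {"id": 3, "prio": 4, "bw": 5}]
victims = select_victims(flows, needed_bw=5)
print([f["id"] for f in victims],                    # [1]
      mark_fate_sharing(victims, [{1, 2}], flows))   # {2} marked at risk
```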
Abstract:
A technique dynamically restores original attributes of a Traffic Engineering Label Switched Path (TE-LSP) that are provided in a source domain for a destination domain when traversing one or more intermediate domains that may translate the TE-LSP attributes in a computer network. According to the novel technique, a head-end node requests an interdomain TE-LSP having one or more original TE-LSP attributes (e.g., priority, bandwidth, etc.) using a signaling exchange. The head-end node may also request restoration of the original TE-LSP attributes upon entrance into the destination domain. Intermediate domains (e.g., border routers of the domains) receiving the request may translate the original TE-LSP attributes into corresponding intermediate domain TE-LSP attributes. When the request reaches the destination domain, the intermediate domain TE-LSP attributes of the requested TE-LSP are restored into the original TE-LSP attributes.
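One way to picture this (a sketch with invented field names, not the actual signaling format): the original attributes ride along unmodified, like an opaque object in the signaling message, while intermediate border routers overwrite only the working copy.

```python
from dataclasses import dataclass, field

@dataclass
class TeLspRequest:
    """Toy signaling state for an inter-domain TE-LSP request."""
    priority: int
    bandwidth: float
    restore_requested: bool = True
    original: dict = field(default_factory=dict)

def enter_intermediate_domain(req, translate):
    """Border router of an intermediate domain: stash the originals on
    first translation, then overwrite the working attributes."""
    if not req.original:
        req.original = {"priority": req.priority,
                        "bandwidth": req.bandwidth}
    req.priority, req.bandwidth = translate(req.priority, req.bandwidth)
    return req

def enter_destination_domain(req):
    """Border router of the destination domain: restore the head-end's
    original attributes if restoration was requested."""
    if req.restore_requested and req.original:
        req.priority = req.original["priority"]
        req.bandwidth = req.original["bandwidth"]
        req.original = {}
    return req

req = TeLspRequest(priority=3, bandwidth=20.0)
req = enter_intermediate_domain(req, lambda p, b: (min(p + 2, 7), b))
print(req.priority)                              # 5: translated mid-path
print(enter_destination_domain(req).priority)    # 3: restored
```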
Abstract:
Systems and methods for distinguishing a node failure from a link failure are provided. By strengthening the assumption of independent failures, bandwidth sharing among backup tunnels protecting the links and nodes of a network is facilitated, as is distributed computation of backup tunnel placement. Thus a backup tunnel overlay network can provide guaranteed bandwidth in the event of a failure.
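The bandwidth-sharing consequence can be sketched as a capacity check (illustrative structures, not the patented method): because at most one element, link or node, is assumed to fail at a time, a link's backup pool only needs to cover the worst single failure rather than the sum over all backup tunnels it carries.

```python
from collections import defaultdict

def backup_capacity_ok(backup_tunnels, backup_pool):
    """Each tunnel protects one element (a link or a node) and is only
    activated when that element fails. Under independent single
    failures, tunnels protecting different elements share bandwidth."""
    # demand[link][element] = bandwidth placed on 'link' if 'element' fails
    demand = defaultdict(lambda: defaultdict(float))
    for t in backup_tunnels:
        for link in t["path_links"]:
            demand[link][t["protects"]] += t["bw"]
    return all(max(per_element.values()) <= backup_pool[link]
               for link, per_element in demand.items())

tunnels = [
    {"protects": "node_B",  "bw": 10, "path_links": ["L1", "L2"]},
    {"protects": "link_AB", "bw": 8,  "path_links": ["L1"]},
]
# L1 must hold max(10, 8) = 10 units, not 18: node_B and link_AB are
# assumed never to fail together, which is why telling a node failure
# apart from a link failure matters for activating the right tunnels.
print(backup_capacity_ok(tunnels, {"L1": 10, "L2": 10}))    # True
```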