Abstract:
Techniques are provided for managing movements of virtual machines in a network. At a first switch, a virtual machine (VM) is detected. The VM is hosted by a physical server coupled to the first switch. A message is sent to other switches indicating that the VM is hosted by the physical server. When the first switch is paired with a second switch as a virtual port channel (vPC) pair, the message includes a switch identifier that identifies the second switch. A receiving switch receives, from a source switch in the network, the message comprising a route update associated with the VM. A routing table of the receiving switch is evaluated to determine whether the host route is associated with a server-facing physical port. The message is examined to determine whether it contains the switch identifier.
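A minimal Python sketch of the receiving-switch decision just described, assuming invented names (Switch, RouteUpdate) and a routing table reduced to a flag marking whether a host route was learned on a server-facing port; this illustrates the logic, not the patented implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RouteUpdate:
    host_ip: str
    source_switch: str
    vpc_peer_id: Optional[str] = None  # present when the source is half of a vPC pair

@dataclass
class Switch:
    switch_id: str
    # host route -> True when the route was learned on a server-facing port
    routing_table: dict = field(default_factory=dict)

    def handle_route_update(self, msg: RouteUpdate) -> str:
        if not self.routing_table.get(msg.host_ip, False):
            # Host is not locally attached: just install/refresh the remote route.
            return "install-remote-route"
        # The host route points at a local server-facing port. If the message
        # names this switch as the sender's vPC peer, the update came from the
        # other half of our vPC pair and the VM has not actually moved.
        if msg.vpc_peer_id == self.switch_id:
            return "keep-local-route"
        # Otherwise the VM genuinely moved to a different switch.
        return "withdraw-local-route"

sw = Switch(switch_id="leaf-1", routing_table={"10.0.0.5": True})
update = RouteUpdate(host_ip="10.0.0.5", source_switch="leaf-2", vpc_peer_id="leaf-1")
print(sw.handle_route_update(update))  # -> keep-local-route
```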
Abstract:
Coordinating gateways for multi-destination traffic across a TRILL fabric and a VXLAN/IP fabric with a plurality of TRILL IS-IS TLVs and a plurality of Layer 3 IS-IS TLVs is provided herein. The plurality of TRILL IS-IS TLVs and the plurality of Layer 3 IS-IS TLVs effectuate: grafting an IP multicast shared tree with a plurality of TRILL distribution trees at only one of a plurality of gateways in a network interworking a TRILL fabric and a VXLAN/IP fabric; ensuring that multicast traffic traversing from the plurality of TRILL distribution trees is not looped back to the TRILL fabric through the VXLAN/IP fabric; restoring connectivity among a plurality of VXLAN/IP fabric partitions through the TRILL fabric if the VXLAN/IP fabric is partitioned; and restoring connectivity among a plurality of TRILL fabric partitions through the VXLAN/IP fabric if the TRILL fabric is partitioned.
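One way to picture the coordination (the deterministic hash election below is an assumption for illustration; the abstract specifies only that IS-IS TLVs carry the needed information) is that every gateway computes the same per-group winner from the advertised gateway set, so exactly one gateway grafts the shared tree, while a split-horizon check keeps TRILL-sourced traffic from re-entering the TRILL fabric:

```python
import hashlib

GATEWAYS = ["gw-1", "gw-2", "gw-3"]  # gateway set learned from the advertised TLVs

def elected_grafter(group: str, gateways=GATEWAYS) -> str:
    """Every gateway computes the same winner from the same advertised set,
    so exactly one gateway grafts the IP shared tree for this group."""
    return min(gateways,
               key=lambda gw: hashlib.sha256(f"{group}:{gw}".encode()).hexdigest())

def should_forward(my_id: str, group: str, src_fabric: str, dst_fabric: str) -> bool:
    if my_id != elected_grafter(group):
        return False   # non-elected gateways do not bridge the fabrics
    if src_fabric == "trill" and dst_fabric == "trill":
        return False   # never loop TRILL traffic back in via the VXLAN/IP fabric
    return True

print(elected_grafter("239.1.1.1"))
print(should_forward("gw-1", "239.1.1.1", "trill", "vxlan"))
```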
Abstract:
An example method for determining an optimal forwarding path across a network having VxLAN gateways configured to implement both fine-grained labeling (FGL) networking and VxLAN capabilities can include learning RBridge nicknames associated with the VxLAN gateways in the network. Additionally, the method can include determining a path cost over the FGL network between each of the VxLAN gateways and a source node and a path cost over the VxLAN between each of the VxLAN gateways and a destination node. Further, the method can include determining an encapsulation overhead metric associated with the VxLAN and selecting one of the VxLAN gateways as an optimal VxLAN gateway. The selection can be based on the computed path costs over the FGL network and the VxLAN and the encapsulation overhead metric.
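A toy computation of the selection criterion, with invented gateway names and cost values: the total cost of routing through a gateway is its FGL-side path cost plus its VxLAN-side path cost plus the encapsulation overhead metric, and the gateway minimizing that sum is selected:

```python
# Invented example values; in the method these come from learned RBridge
# nicknames and computed path costs.
fgl_cost = {"gw-A": 10, "gw-B": 20}    # source node -> gateway, over the FGL network
vxlan_cost = {"gw-A": 30, "gw-B": 5}   # gateway -> destination node, over the VxLAN
ENCAP_OVERHEAD = 4                     # metric for VxLAN encapsulation overhead

def best_gateway() -> str:
    return min(fgl_cost, key=lambda gw: fgl_cost[gw] + vxlan_cost[gw] + ENCAP_OVERHEAD)

print(best_gateway())  # -> gw-B (20 + 5 + 4 = 29 beats 10 + 30 + 4 = 44)
```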
Abstract:
A method is provided in one example embodiment and includes establishing a pool of multicast group addresses reserved for assignment to Layer 2 (“L2”) and Layer 3 (“L3”) segment IDs of a network comprising an Internet protocol (“IP”) fabric, and assigning a first multicast group address from the pool to an L3 segment ID of a Virtual Routing and Forwarding element (“VRF”) associated with a new partition established in the network. The method further includes pushing the first multicast group address assignment to a database to provide arguments for configuration profiles, and configuring a new tenant detected on a leaf node of the network using the configuration profiles, in which the configuring comprises specifying multicast group to segment ID assignments for the tenant as specified in the configuration profiles.
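A minimal sketch of that workflow in Python, with hypothetical names (GroupPool, profile_db): reserve a pool of multicast group addresses, assign one to the L3 segment ID of the new partition's VRF, and push the assignment to a database that supplies arguments for configuration profiles:

```python
import ipaddress

class GroupPool:
    """Pool of multicast group addresses reserved for L2/L3 segment IDs."""
    def __init__(self, base="239.1.0.0", size=256):
        start = int(ipaddress.IPv4Address(base))
        self.free = [str(ipaddress.IPv4Address(start + i)) for i in range(size)]
        self.assigned = {}  # segment ID -> multicast group address

    def assign(self, segment_id: int) -> str:
        group = self.free.pop(0)
        self.assigned[segment_id] = group
        return group

profile_db = {}          # stands in for the database the assignment is pushed to

pool = GroupPool()
l3_segment_id = 50001    # segment ID of the new partition's VRF
profile_db["vrf-tenant-red"] = {
    "segmentId": l3_segment_id,
    "mcastGroup": pool.assign(l3_segment_id),  # argument for configuration profiles
}
# A new tenant detected on a leaf node is configured from the profile, which
# specifies the multicast-group-to-segment-ID assignment recorded above.
print(profile_db["vrf-tenant-red"])
```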
Abstract:
A method is provided in one example embodiment and includes determining whether a first network element with which a second network element is attempting to establish an adjacency is a client type element. If the first network element is determined to be a client type element, the method further includes determining whether the first and second network elements are in the same network area. If the first network element is a client type element and the first and second network elements are determined to be in the same network area, the adjacency is established. Subsequent to the establishing, a determination is made whether the first network element includes an inter-area forwarder (IAF).
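A compact sketch of the adjacency decision, modeling the client-type check, the same-area check, and the subsequent inter-area forwarder (IAF) determination as plain fields; all names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    is_client: bool        # client type element?
    area: int              # network area
    has_iaf: bool = False  # includes an inter-area forwarder?

def try_establish(first: Element, second: Element) -> bool:
    if not first.is_client:
        return False               # only client type elements qualify here
    if first.area != second.area:
        return False               # elements must be in the same network area
    # Adjacency is established; subsequently check for an IAF.
    if first.has_iaf:
        print(f"{first.name} includes an inter-area forwarder")
    return True

print(try_establish(Element("n1", True, area=1, has_iaf=True),
                    Element("n2", True, area=1)))
```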
Abstract:
Techniques are described for utilizing a Software-Defined Networking (SDN) controller and/or a Data Center Network Manager (DCNM), together with network border gateway switches associated with a multi-site cloud computing network, to provide reachability data indicating physical links between the border gateways disposed in different sites of the multi-site network and to establish secure connection tunnels utilizing the physical links and unique encryption keys. The SDN controller and/or DCNM may be configured to generate a physical underlay model representing the physical underlay, or network transport capabilities, and/or a logical overlay model representing a logical overlay, or overlay control plane, of the multi-site network. The SDN controller may also generate an encryption key model representing the associations between the encryption keys and the physical links between the associated network border gateway switches. The SDN controller may utilize the models to determine route paths for transmitting network traffic spanning different sites of the multi-site network at line speed.
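An illustrative rendering of the three models as plain data, with invented names and values; real models would be derived from the reachability data reported by the border gateway switches:

```python
# Three plain-data "models"; values are invented for the example.
physical_underlay = {  # physical links between border gateways in different sites
    ("bgw-east", "bgw-west"): {"bandwidth_gbps": 100},
    ("bgw-east", "bgw-south"): {"bandwidth_gbps": 40},
}
logical_overlay = {"vrf-blue": ["site-east", "site-west"]}  # overlay control plane
encryption_keys = {  # one unique key per physical link
    ("bgw-east", "bgw-west"): "key-a1",
    ("bgw-east", "bgw-south"): "key-b2",
}

def secure_tunnel(link):
    """Pair a physical link with its unique key to describe a secure tunnel."""
    return {"link": link, "key": encryption_keys[link], **physical_underlay[link]}

# Choose the highest-bandwidth link as the route path for cross-site traffic.
best = max(physical_underlay, key=lambda l: physical_underlay[l]["bandwidth_gbps"])
print(secure_tunnel(best))
```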
Abstract:
A system and a method are disclosed for enabling interoperability between data plane learning endpoints and control plane learning endpoints in an overlay network environment. An exemplary method for managing network traffic in the overlay network environment includes receiving network packets in an overlay network from data plane learning endpoints and control plane learning endpoints, wherein the overlay network extends Layer 2 network traffic over a Layer 3 network; operating in a data plane learning mode when a network packet is received from a data plane learning endpoint; and operating in a control plane learning mode when the network packet is received from a control plane learning endpoint. Where the overlay network includes more than one overlay segment, the method further includes operating as an anchor node for routing inter-overlay segment traffic to and from hosts that operate behind the data plane learning endpoints.
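A minimal dispatch sketch, assuming hypothetical endpoint names: the forwarder classifies the ingress endpoint and handles the packet in the matching learning mode, learning MAC bindings from the data path only for data plane learning endpoints:

```python
DATA_PLANE_EPS = {"vtep-legacy-1"}   # flood-and-learn endpoints
CONTROL_PLANE_EPS = {"vtep-evpn-1"}  # e.g. endpoints learning via BGP EVPN

mac_table = {}  # MAC address -> endpoint, populated per the active mode

def receive(packet: dict) -> str:
    src_ep, src_mac = packet["ingress_ep"], packet["src_mac"]
    if src_ep in DATA_PLANE_EPS:
        # Data plane learning mode: learn the binding from the packet itself.
        mac_table[src_mac] = src_ep
        return "data-plane-mode"
    if src_ep in CONTROL_PLANE_EPS:
        # Control plane learning mode: bindings arrive via the control plane,
        # so nothing is learned from the data path here.
        return "control-plane-mode"
    return "drop"

print(receive({"ingress_ep": "vtep-legacy-1", "src_mac": "aa:bb:cc:01:02:03"}))
print(mac_table)
```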
Abstract:
Presented herein are hybrid approaches to multi-destination traffic forwarding in overlay networks that can be used to facilitate interoperability between head-end-replication-support network devices (i.e., those that only use head-end replication) and multicast-support network devices (i.e., those that only use native multicast). By generally using the tunnel endpoints' (TEPs') existing supported functionality for sending multi-destination traffic, and enhancing the TEPs to receive multi-destination traffic with the encapsulation scheme they do not natively support, the presented methods and systems minimize the enhancements required to achieve interoperability and circumvent any hard limitations that the endpoint hardware may have. The present methods and systems may be used with legacy hardware that is already commissioned or deployed, as well as with new hardware that is configured with legacy protocols.
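A sketch of the hybrid idea with invented names: each TEP sends multi-destination (BUM) traffic the way it natively supports, head-end replication or native multicast, and the only enhancement is accepting either encapsulation on receive:

```python
TEPS = {
    "tep-1": {"sends": "her", "peers": ["tep-2", "tep-3"]},  # HER-only device
    "tep-2": {"sends": "multicast", "group": "239.2.2.2"},   # multicast-only device
}

def send_bum(tep_name: str, frame: str):
    """Send using whatever the TEP natively supports."""
    tep = TEPS[tep_name]
    if tep["sends"] == "her":
        # Head-end replication: one unicast copy per remote TEP.
        return [(peer, frame) for peer in tep["peers"]]
    # Native multicast: a single copy addressed to the underlay group.
    return [(tep["group"], frame)]

def receive_bum(tep_name: str, encap: str) -> bool:
    # The enhancement: every TEP accepts either encapsulation on receive,
    # regardless of which scheme it uses to send.
    return encap in ("her-unicast", "multicast")

print(send_bum("tep-1", "bum-frame"))
print(receive_bum("tep-2", "her-unicast"))  # multicast TEP now accepts HER copies
```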
Abstract:
Coexistence and migration of legacy and VXLAN networks may be provided. A first anchor leaf switch and a second anchor leaf switch may detect that they can reach each other over a Virtual Extensible Local Area Network (VXLAN) overlay layer 2 network. In response to detecting that they can reach each other over the VXLAN, the second anchor leaf switch may block the VLANs mapped to the VXLAN's VXLAN Network Identifier (VNI) on its ports connecting to the spine routers. In addition, the first anchor leaf switch and the second anchor leaf switch may detect that they can reach each other over a physical layer 2 network. In response to detecting that they can reach each other over the physical layer 2 network, the second anchor leaf switch may block the corresponding VXLAN segments.
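An illustrative model of the second anchor leaf's two blocking rules, with invented class and method names:

```python
class AnchorLeaf:
    def __init__(self, name: str):
        self.name = name
        self.blocked_vlans_on_spine_ports = set()
        self.blocked_vxlan_segments = set()

    def on_vxlan_reachability(self, vni: int, mapped_vlans):
        # Reachable over the VXLAN overlay layer 2 network: block the VLANs
        # mapped to that VNI on ports connecting to the spine routers.
        self.blocked_vlans_on_spine_ports |= set(mapped_vlans)

    def on_physical_l2_reachability(self, vni: int):
        # Reachable over the physical layer 2 network: block the VXLAN segment.
        self.blocked_vxlan_segments.add(vni)

leaf2 = AnchorLeaf("anchor-leaf-2")
leaf2.on_vxlan_reachability(vni=5000, mapped_vlans=[100, 200])
leaf2.on_physical_l2_reachability(vni=5000)
print(leaf2.blocked_vlans_on_spine_ports, leaf2.blocked_vxlan_segments)
```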
Abstract:
Systems, methods, and computer-readable media for Operations, Administration, and Maintenance (OAM) in overlay networks. In response to receiving a packet associated with an OAM operation from a device in an overlay network, the system generates an OAM packet. The system can be coupled with the overlay network and can include a tunnel endpoint interface associated with an underlay address and a virtual interface associated with an overlay address. The overlay address can be an anycast address assigned to the system and another device in the overlay network. Next, the system determines that a destination address associated with the packet is not reachable through the virtual interface, the destination address corresponding to a destination node in the overlay network. The system also determines that the destination address is reachable through the tunnel endpoint interface. The system then provides the underlay address associated with the tunnel endpoint interface as a source address in the OAM packet.
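A minimal sketch of the source-address selection, with invented addresses and route tables: when the destination is unreachable through the anycast virtual interface but reachable through the tunnel endpoint interface, the OAM packet is sourced from the unique underlay address so replies return to this device rather than to any anycast holder:

```python
import ipaddress

ANYCAST_OVERLAY_ADDR = "10.200.0.1"  # virtual interface; shared with another device
UNDERLAY_TEP_ADDR = "192.168.1.11"   # tunnel endpoint interface; unique to this device

virtual_if_routes = {"10.200.5.0/24"}            # reachable via the virtual interface
tep_routes = {"192.168.0.0/16", "10.99.0.0/16"}  # reachable via the tunnel endpoint

def reachable(dest: str, routes) -> bool:
    ip = ipaddress.ip_address(dest)
    return any(ip in ipaddress.ip_network(r) for r in routes)

def oam_source(dest: str) -> str:
    if reachable(dest, virtual_if_routes):
        return ANYCAST_OVERLAY_ADDR
    if reachable(dest, tep_routes):
        # Replies must return to this device, not to any holder of the
        # anycast address, so source the OAM packet from the underlay address.
        return UNDERLAY_TEP_ADDR
    raise ValueError("destination unreachable")

print(oam_source("10.99.3.7"))  # -> 192.168.1.11
```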