Abstract:
One or more embodiments provide techniques for migrating virtual machines (VMs) from a private data center to a cloud data center. A hybrid cloud manager determines a scope of migration from the private data center to the cloud data center. The hybrid cloud manager groups each VM included in the scope of migration into one or more clusters. The hybrid cloud manager defines one or more migration phases. Each migration phase comprises a subset of the one or more clusters. The hybrid cloud manager generates a migration schedule based on at least the one or more migration phases. The hybrid cloud manager migrates the VMs from the private data center to the cloud data center in accordance with the migration schedule.
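As a rough illustration of the scheduling flow described above, the following Python sketch groups a migration scope into clusters, defines phases over those clusters, and emits a schedule. All names here (VM, affinity_group, group_into_clusters, and so on) are hypothetical and do not reflect the actual hybrid cloud manager implementation.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class VM:
    name: str
    affinity_group: str  # hypothetical grouping criterion: VMs in the same group migrate together

def group_into_clusters(scope: List[VM]) -> Dict[str, List[VM]]:
    """Group each VM in the migration scope into one or more clusters."""
    clusters: Dict[str, List[VM]] = {}
    for vm in scope:
        clusters.setdefault(vm.affinity_group, []).append(vm)
    return clusters

def define_phases(clusters: Dict[str, List[VM]], clusters_per_phase: int) -> List[List[List[VM]]]:
    """Define migration phases, each comprising a subset of the clusters."""
    ordered = list(clusters.values())
    return [ordered[i:i + clusters_per_phase]
            for i in range(0, len(ordered), clusters_per_phase)]

def generate_schedule(phases: List[List[List[VM]]]):
    """Generate a migration schedule based on the phases (one time slot per phase)."""
    return [(slot, [vm.name for cluster in phase for vm in cluster])
            for slot, phase in enumerate(phases)]

scope = [VM("db-01", "db"), VM("db-02", "db"), VM("web-01", "web"), VM("app-01", "app")]
schedule = generate_schedule(define_phases(group_into_clusters(scope), clusters_per_phase=2))
for slot, vms in schedule:
    print(f"phase {slot}: migrate {vms}")
```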
Abstract:
Techniques for stateful connection optimization over stretched networks are disclosed. Such stretched networks may extend across both a data center and a cloud. In one embodiment, configuration changes are made to cloud layer 2 (L2) concentrators used by extended networks and a cloud router such that the L2 concentrators block packets with the cloud router's source MAC address and block address resolution protocol (ARP) requests for a gateway IP address from/to cloud networks that are part of the extended networks. Further, the cloud router is configured with the same gateway IP address as that of a default gateway router in the data center and responds to ARP requests for the gateway IP address with its own MAC address. In addition, specific prefix routes (e.g., /32 routes) for virtual computing instances on route optimized networks in the cloud are injected into the cloud router and propagated to a data center router.
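The configuration changes can be pictured as a minimal data-object sketch. Names such as CloudL2Concentrator, CloudRouter, and configure_route_optimization are hypothetical stand-ins for whatever the embodiment actually configures.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class CloudL2Concentrator:
    # Block packets carrying the cloud router's source MAC, and ARP requests
    # for the gateway IP, from/to cloud networks that are part of the extended network.
    blocked_source_macs: Set[str] = field(default_factory=set)
    blocked_arp_targets: Set[str] = field(default_factory=set)

@dataclass
class CloudRouter:
    mac: str
    gateway_ip: str = ""                                    # set to match the on-prem default gateway
    host_routes: List[str] = field(default_factory=list)    # specific-prefix (/32) routes

def configure_route_optimization(concentrators, cloud_router, onprem_gateway_ip, vm_ips):
    cloud_router.gateway_ip = onprem_gateway_ip             # same gateway IP as the data center router
    for c in concentrators:
        c.blocked_source_macs.add(cloud_router.mac)         # block router-sourced packets
        c.blocked_arp_targets.add(onprem_gateway_ip)        # block ARP for the gateway IP
    # Inject /32 routes for cloud VMs; these would also be propagated to the data center router.
    cloud_router.host_routes.extend(f"{ip}/32" for ip in vm_ips)

router = CloudRouter(mac="00:50:56:aa:bb:cc")
configure_route_optimization([CloudL2Concentrator()], router, "10.0.0.1", ["10.0.0.25"])
print(router.gateway_ip, router.host_routes)
```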
Abstract:
The disclosure herein describes an edge device of a network for distributed policy enforcement. During operation, the edge device receives an initial packet for an outgoing traffic flow and identifies a policy triggered by the initial packet. The edge device performs a reverse lookup to identify at least an intermediate node that was previously traversed by the initial packet and the traffic parameters associated with the initial packet at the identified intermediate node. The edge device translates the policy based on the traffic parameters at the intermediate node and forwards the translated policy to the intermediate node, thus facilitating the intermediate node in applying the policy to the traffic flow.
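A minimal sketch of the reverse lookup and policy translation steps follows. The data shapes, the encap_overhead parameter, and the translation rule are invented for illustration only; the abstract does not specify how a policy is translated.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Packet:
    flow_id: str
    path: List[str]          # nodes traversed so far; the last entry is the edge device

@dataclass
class Policy:
    match_field: str
    limit: int

def reverse_lookup(packet: Packet, flow_params: Dict[str, Dict[str, Dict[str, int]]]):
    """Walk the packet's path backward to find an intermediate node and the
    traffic parameters the flow had at that node."""
    for node in reversed(packet.path[:-1]):
        params = flow_params.get(node, {}).get(packet.flow_id)
        if params is not None:
            return node, params
    return None, None

def translate_policy(policy: Policy, params: Dict[str, int]) -> Policy:
    """Re-express the policy in terms of the parameters seen at the intermediate
    node (here: adjust the limit by a made-up encapsulation overhead)."""
    return Policy(policy.match_field, policy.limit + params.get("encap_overhead", 0))

# Edge device handling an initial outgoing packet:
pkt = Packet("flow-1", path=["host-a", "switch-1", "edge-1"])
node, params = reverse_lookup(pkt, {"switch-1": {"flow-1": {"encap_overhead": 50}}})
if node:
    translated = translate_policy(Policy("bytes_per_sec", 1000), params)
    print(f"forward {translated} to {node}")   # the intermediate node then applies the policy
```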
Abstract:
Some embodiments provide various methods for offloading operations in an O-RAN (Open Radio Access Network) onto control plane (CP) or edge applications that execute on host computers with hardware accelerators in software defined datacenters (SDDCs). At the CP or edge application operating on a machine executing on a host computer with a hardware accelerator, the method of some embodiments receives data from an O-RAN E2 unit to perform an operation. The method uses a driver of the machine to communicate directly with the hardware accelerator to direct the hardware accelerator to perform a set of computations associated with the operation. This driver allows the communication with the hardware accelerator to bypass an intervening set of drivers executing on the host computer between the machine's driver and the hardware accelerator. Through this driver, the application in some embodiments receives the computation results, which it then provides to one or more O-RAN components (e.g., to the E2 unit that provided the data, another E2 unit, or another control plane or edge application).
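The offload path can be sketched as below. AcceleratorPassthroughDriver, EdgeApp, and handle_e2_request are hypothetical names; the in-process lambda merely stands in for the accelerator computation so the sketch stays self-contained.

```python
class AcceleratorPassthroughDriver:
    """Hypothetical driver inside the machine (VM/pod) that talks to the hardware
    accelerator directly, bypassing the host's intervening driver stack."""
    def offload(self, computation, data):
        # In a real system this would issue commands over a passthrough device;
        # here we simply run the computation locally to keep the sketch runnable.
        return computation(data)

class EdgeApp:
    def __init__(self, driver: AcceleratorPassthroughDriver):
        self.driver = driver

    def handle_e2_request(self, data):
        # Data arrives from an O-RAN E2 unit; the app directs the accelerator
        # to perform the set of computations associated with the operation.
        result = self.driver.offload(lambda xs: sum(xs) / len(xs), data)
        return result  # provided back to the E2 unit or another O-RAN component

app = EdgeApp(AcceleratorPassthroughDriver())
print(app.handle_e2_request([3, 5, 7]))
```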
Abstract:
Connectivity is optimized between data centers in a hybrid cloud system having a first data center managed by a first organization and a second data center managed by a second organization, the first organization being a tenant in the second data center. According to the described technique, a path-optimized connection is established through a wide area network (WAN) between a first gateway of the first data center and a second gateway of the second data center for an application executing in the first data center, based on the performance of paths across a set of Internet Protocol (IP) flows. Application packets received from the application at the first gateway are forwarded to a WAN optimization appliance in the first data center. WAN-optimized application packets received from the WAN optimization appliance at the first gateway are then sent to the second gateway over the path-optimized connection.
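A small sketch of the two steps (pick the best-performing IP flow, then chain packets through the WAN optimization appliance before sending) follows. The scoring formula and all names (PathProbe, pick_path, forward_application_packet) are assumptions, not the actual selection criteria.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PathProbe:
    flow_id: str          # one candidate IP flow between the two gateways
    latency_ms: float
    loss_pct: float

def pick_path(probes: List[PathProbe]) -> PathProbe:
    """Choose the path-optimized connection based on measured path performance
    (a toy score combining latency and loss)."""
    return min(probes, key=lambda p: p.latency_ms * (1 + p.loss_pct))

def send_over(flow_id: str, payload: bytes) -> None:
    print(f"sending {len(payload)} bytes over {flow_id}")

def forward_application_packet(packet: bytes, wan_optimizer, path: PathProbe) -> None:
    optimized = wan_optimizer(packet)      # first gateway forwards to the WAN optimization appliance
    send_over(path.flow_id, optimized)     # WAN-optimized packets go out the path-optimized connection

best = pick_path([PathProbe("flow-a", 40.0, 0.01), PathProbe("flow-b", 35.0, 0.2)])
forward_application_packet(b"app-data", wan_optimizer=lambda p: p[:4], path=best)
```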
Abstract:
Techniques for creating layer 2 (L2) extension networks are disclosed. One embodiment permits an L2 extension network to be created by deploying, configuring, and connecting a pair of virtual appliances in the data center and the cloud so that the appliances communicate via secure tunnels and bridge networks in the data center and the cloud. A pair of virtual appliances are first deployed in the data center and the cloud, and secure tunnels are then created between the virtual appliances. Thereafter, a stretched network is created by connecting a network interface in each of the virtual appliances to a respective local network, configuring virtual switch ports to which the virtual appliances are connected as sink ports that receive traffic with non-local destinations, and configuring each of the virtual appliances to bridge the network interface therein that is connected to the local network and tunnels between the pair of virtual appliances.
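The deploy / tunnel / bridge sequence can be summarized in a short sketch. VirtualAppliance, Tunnel, and create_stretched_network are hypothetical names, and the boolean sink_port flag only marks the configuration step rather than modeling a real virtual switch.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualAppliance:
    site: str                                              # "datacenter" or "cloud"
    bridged_interfaces: List[str] = field(default_factory=list)
    sink_port: bool = False                                # port receives traffic with non-local destinations

@dataclass
class Tunnel:
    endpoints: tuple

def create_stretched_network(local_net: str):
    # 1. Deploy a pair of virtual appliances in the data center and the cloud.
    dc, cloud = VirtualAppliance("datacenter"), VirtualAppliance("cloud")
    # 2. Create secure tunnels between the virtual appliances.
    tunnel = Tunnel(endpoints=("datacenter", "cloud"))
    # 3. Connect a network interface in each appliance to its local network,
    #    configure the virtual switch ports as sink ports, and bridge the
    #    local interface with the tunnel.
    for appliance in (dc, cloud):
        appliance.sink_port = True
        appliance.bridged_interfaces.extend([local_net, "tunnel0"])
    return dc, cloud, tunnel

print(create_stretched_network("vlan-100"))
```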
Abstract:
A hybrid computing system includes an on-premise data center and a cloud computing system. To connect an organization's multiple data centers, a gateway may utilize the existing connections between the private data center and the cloud computing system rather than a direct connection to the other of the organization's data centers.
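The routing preference can be expressed in a few lines. The function name and the dictionary of cloud links are invented for illustration; the abstract only states the preference, not how it is implemented.

```python
def route_between_sites(site_a: str, site_b: str, cloud_links: dict) -> list:
    """Prefer reusing each data center's existing connection to the cloud computing
    system over provisioning a direct site-to-site link (hypothetical routing choice)."""
    if site_a in cloud_links and site_b in cloud_links:
        return [site_a, "cloud", site_b]   # hairpin through the cloud computing system
    return [site_a, site_b]                # otherwise fall back to a direct connection

print(route_between_sites("dc-east", "dc-west", {"dc-east": True, "dc-west": True}))
```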
Abstract:
Techniques for dynamically configuring a dynamic host configuration protocol (DHCP) server in a virtual network environment are described. In one example embodiment, DHCP bindings are configured using virtual machine (VM) inventory objects. Further, the configured DHCP bindings are transformed by replacing the VM inventory objects in the configured DHCP bindings with associated media access control (MAC) addresses using a VM object attribute table. Furthermore, the transformed DHCP bindings are sent to the DHCP server for assigning Internet protocol (IP) addresses to multiple VMs running on a plurality of host computing systems in a computing network.
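The binding transformation step amounts to a lookup and substitution, sketched below. The dictionary shapes of the bindings and the VM object attribute table are hypothetical.

```python
def transform_dhcp_bindings(bindings, vm_object_attributes):
    """Replace VM inventory objects in DHCP bindings with their MAC addresses,
    using a VM object attribute table (all names and shapes here are assumptions)."""
    transformed = []
    for binding in bindings:
        mac = vm_object_attributes[binding["vm_object"]]["mac"]
        transformed.append({"mac": mac, "ip": binding["ip"]})
    return transformed   # ready to send to the DHCP server

vm_attrs = {"vm-web-01": {"mac": "00:50:56:01:02:03"}}
print(transform_dhcp_bindings([{"vm_object": "vm-web-01", "ip": "192.168.1.10"}], vm_attrs))
```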
Abstract:
Techniques for dynamic configuration of a load balancer in a virtual network environment are described. In one example embodiment, load balancing rules are configured using virtual machine (VM) inventory objects. The configured load balancing rules are then transformed by replacing the VM inventory objects in the configured load balancing rules with associated Internet protocol (IP) addresses using an IP address management (IPAM) table or a network address translation (NAT) table. The transformed load balancing rules are then sent to the load balancer for load balancing network traffic between a plurality of VMs running on one or more host computing systems in one or more computing networks.
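The analogous transformation for load balancing rules is sketched below; the rule and table shapes are hypothetical, and the NAT lookup simply applies a translation when an entry exists.

```python
def transform_lb_rules(rules, ipam_table, nat_table=None):
    """Replace VM inventory objects in load balancing rules with IP addresses,
    using an IPAM table (or a NAT table, when a translation applies)."""
    nat_table = nat_table or {}
    transformed = []
    for rule in rules:
        members = []
        for vm_object in rule["pool"]:
            ip = ipam_table[vm_object]
            members.append(nat_table.get(ip, ip))   # translate if a NAT entry exists
        transformed.append({"vip": rule["vip"], "pool": members})
    return transformed   # ready to send to the load balancer

ipam = {"vm-app-01": "10.0.0.5", "vm-app-02": "10.0.0.6"}
print(transform_lb_rules([{"vip": "203.0.113.10:443", "pool": ["vm-app-01", "vm-app-02"]}], ipam))
```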
Abstract:
Techniques for dynamic configuration of a domain name system (DNS) server in a virtual network environment are described. In one example embodiment, DNS rules are configured using virtual machine (VM) inventory objects and associated DNS names. Further, the configured DNS rules are transformed by replacing the VM inventory objects in the configured DNS rules with associated Internet protocol (IP) addresses using an IP address management (IPAM) table or a network address translation (NAT) table, and by replacing the DNS names in the configured DNS rules with modified DNS names using a zone table and a view table. Furthermore, the transformed DNS rules are sent to the DNS server for performing domain name resolutions associated with multiple VMs running on a plurality of host computing systems in a computing network.
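The DNS rule transformation combines both substitutions, as sketched below. The table shapes (ipam_table, zone_table, view_table, nat_table) and the way a modified DNS name is composed are assumptions made only to keep the example runnable.

```python
def transform_dns_rules(rules, ipam_table, zone_table, view_table, nat_table=None):
    """Replace VM inventory objects with IP addresses (via IPAM/NAT tables) and
    rewrite DNS names into modified DNS names (via zone and view tables)."""
    nat_table = nat_table or {}
    transformed = []
    for rule in rules:
        ip = ipam_table[rule["vm_object"]]
        ip = nat_table.get(ip, ip)                          # translate if a NAT entry exists
        zone = zone_table[rule["vm_object"]]
        fqdn = f'{rule["dns_name"]}.{view_table[zone]}'     # modified DNS name
        transformed.append({"name": fqdn, "ip": ip})
    return transformed   # ready to send to the DNS server

ipam = {"vm-db-01": "10.0.0.7"}
zones = {"vm-db-01": "internal"}
views = {"internal": "corp.example.com"}
print(transform_dns_rules([{"vm_object": "vm-db-01", "dns_name": "db01"}], ipam, zones, views))
```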