Abstract:
Techniques for upgrading virtual appliances in a hybrid cloud computing system are provided. In one embodiment, virtual appliances are upgraded by deploying the upgraded appliances in both a data center and a cloud, configuring the upgraded appliances to have the same IP addresses as original appliances, and disconnecting the original appliances from networks to which they are connected and connecting the upgraded appliances to those networks via the same ports previously used by the original appliances. In another embodiment, upgraded appliances are deployed in the data center and the cloud, but configured with new IP addresses that are different from those of the original appliances, and connections are switched from those of the original appliances to new connections with the new IP addresses. Embodiments disclosed herein permit virtual appliances to be upgraded or replaced with relatively little downtime so as to help minimize disruptions to existing traffic flows.
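A minimal Python sketch of the first embodiment's swap, assuming in-memory stand-ins (an Appliance object and virtual switch port IDs) rather than a real management API:

    # Illustrative only: dataclass objects stand in for appliances and the
    # virtual switch ports they attach to.
    from dataclasses import dataclass, field

    @dataclass
    class Appliance:
        name: str
        ip: str
        ports: list = field(default_factory=list)   # virtual switch port IDs

    def upgrade_in_place(original: Appliance, upgraded_name: str) -> Appliance:
        # 1. Deploy the upgraded appliance while the original keeps serving traffic.
        upgraded = Appliance(name=upgraded_name, ip="")
        # 2. Configure it with the same IP address as the original appliance.
        upgraded.ip = original.ip
        # 3. Disconnect the original from its networks and connect the upgraded
        #    appliance on the same ports, keeping downtime relatively short.
        upgraded.ports, original.ports = original.ports, []
        return upgraded

    dc_original = Appliance("gw-v1-datacenter", "10.0.0.2", ports=["vport-7"])
    dc_upgraded = upgrade_in_place(dc_original, "gw-v2-datacenter")
    print(dc_upgraded)   # same IP 10.0.0.2, now attached to vport-7

The same swap would be performed for the appliance deployed in the cloud; the second embodiment differs in that the upgraded appliances receive new IP addresses and connections are switched to new connections using those addresses.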
Abstract:
Techniques for stateful connection optimization over stretched networks are disclosed. Such stretched networks may extend across both a data center and a cloud. In one embodiment, configuration changes are made to cloud layer 2 (L2) concentrators used by extended networks and a cloud router such that the L2 concentrators block packets with the cloud router's source MAC address and block address resolution protocol (ARP) requests for a gateway IP address from/to cloud networks that are part of the extended networks. Further, the cloud router is configured with the same gateway IP address as that of a default gateway router in the data center and responds to ARP requests for the gateway IP address with its own MAC address. In addition, specific prefix routes (e.g., /32 routes) for virtual computing instances on route-optimized networks in the cloud are injected into the cloud router and propagated to a data center router.
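A simplified Python sketch of the blocking rules and ARP behavior described above; the MAC address, gateway IP, and route values are assumptions used only for illustration:

    CLOUD_ROUTER_MAC = "02:50:56:00:00:01"   # assumed cloud router MAC
    GATEWAY_IP = "192.168.1.1"               # same gateway IP as the data center default gateway

    def concentrator_allows(packet):
        # Block packets carrying the cloud router's source MAC address.
        if packet.get("src_mac") == CLOUD_ROUTER_MAC:
            return False
        # Block ARP requests for the gateway IP from/to the cloud networks that
        # are part of the extended networks.
        if packet.get("type") == "arp_request" and packet.get("target_ip") == GATEWAY_IP:
            return False
        return True

    def cloud_router_arp_reply(arp_request):
        # The cloud router answers ARP for the gateway IP with its own MAC address.
        if arp_request.get("target_ip") == GATEWAY_IP:
            return {"type": "arp_reply", "ip": GATEWAY_IP, "mac": CLOUD_ROUTER_MAC}
        return None

    # Specific prefix routes (e.g., /32) for route-optimized cloud VMs would be
    # injected into the cloud router and propagated to the data center router,
    # modeled here as a simple route table entry.
    cloud_routes = [{"prefix": "192.168.1.25/32", "next_hop": "cloud-local"}]

    print(concentrator_allows({"src_mac": CLOUD_ROUTER_MAC}))                        # False
    print(cloud_router_arp_reply({"type": "arp_request", "target_ip": GATEWAY_IP}))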
Abstract:
An example method of diagnosing remote sites of a distributed container orchestration system includes: receiving, at a management cluster, a definition of a test suite custom resource; deploying, in response to the test suite custom resource, a first pod in the management cluster; deploying, by the first pod, a second pod in a server of a first remote site of the remote sites; checking, by the second pod, a configuration of the server that includes an additional pod executing alongside the second pod, at least one virtual machine (VM) in which the second pod and the additional pod execute, a hypervisor configured to support the at least one VM, and a hardware platform on which the hypervisor executes; and returning test data from the second pod to the first pod, the test data including results of the step of checking the configuration of the server.
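A rough Python sketch of that diagnostic flow, with dictionaries standing in for the custom resource, pods, and remote-site inventory (all names here are hypothetical):

    def run_test_suite(test_suite_cr, remote_sites):
        results = []
        # The first pod is deployed in the management cluster in response to
        # the test suite custom resource.
        manager_pod = {"name": "diagnostics-manager", "suite": test_suite_cr["name"]}
        for site in remote_sites:
            # The manager pod deploys a worker pod on a server at the remote site.
            worker_pod = {"name": "diagnostics-worker-" + site["name"]}
            # The worker pod checks the server's configuration: the pod running
            # alongside it, the VM(s) they execute in, the hypervisor, and the
            # hardware platform.
            checks = {
                "pods": site["server"]["pods"],
                "vms": site["server"]["vms"],
                "hypervisor": site["server"]["hypervisor"],
                "hardware": site["server"]["hardware"],
            }
            # Test data is returned from the worker pod to the manager pod.
            results.append({"manager": manager_pod["name"],
                            "worker": worker_pod["name"],
                            "site": site["name"],
                            "checks": checks})
        return results

    site = {"name": "edge-1", "server": {"pods": ["app-pod"], "vms": ["vm-1"],
                                         "hypervisor": "esxi-sim", "hardware": "x86-host"}}
    print(run_test_suite({"name": "smoke-tests"}, [site]))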
Abstract:
A centralized namespace controller allocates addresses in a distributed cloud infrastructure on-demand. Upon receiving a request to allocate addresses for a network to be provisioned by a cloud computing system included in the distributed cloud infrastructure, the centralized namespace controller allocates a network address that is unique within the distributed cloud infrastructure. Further, the centralized namespace controller allocates a range of virtual network interface card (NIC) addresses that are unique within the network. The centralized namespace controller then allocates addresses from the range of virtual NIC addresses on an as-requested basis, that is, when a virtual NIC is being created by the cloud computing system on the network. Advantageously, by centralizing the allocation of addresses and dedicating independent NIC address ranges to different cloud computing systems, the centralized namespace controller enables stretched L2 networks between cloud computing systems while preventing duplicated addresses on the stretched networks.
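A minimal Python sketch of that allocation scheme, using in-memory counters; a real controller would persist this state and serve it over an API, and the MAC-style formatting below is only an assumption:

    import itertools

    class NamespaceController:
        def __init__(self):
            self._next_network = itertools.count(1)
            self._nic_ranges = {}   # network id -> iterator over the dedicated NIC range

        def allocate_network(self, range_size=256):
            # Network address unique within the distributed cloud infrastructure.
            network_id = next(self._next_network)
            # Dedicate an independent range of virtual NIC addresses to the network.
            base = network_id * range_size
            self._nic_ranges[network_id] = iter(range(base, base + range_size))
            return network_id

        def allocate_nic(self, network_id):
            # Hand out one address only when a virtual NIC is actually being
            # created ("as-requested"), so stretched L2 networks between cloud
            # computing systems never see duplicated addresses.
            suffix = next(self._nic_ranges[network_id])
            return "02:00:00:00:{:02x}:{:02x}".format((suffix >> 8) & 0xFF, suffix & 0xFF)

    controller = NamespaceController()
    net = controller.allocate_network()
    print(controller.allocate_nic(net))   # e.g. 02:00:00:00:01:00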
Abstract:
Example methods and systems for cloud native network function deployment are described. One example may involve a computer system obtaining cluster configuration information associated with multiple single node clusters (SNCs). Based on the cluster configuration information, the computer system may configure (a) a first SNC on a first node and (b) a second SNC on a second node. The computer system may configure (a) a first virtual agent associated with the first SNC, and (b) a second virtual agent associated with the second SNC. In response to receiving a deployment request to deploy a first pod and a second pod, the computer system may process the deployment request by (a) deploying, using the first virtual agent, the first pod on the first SNC, and (b) deploying, using the second virtual agent, the second pod on the second SNC.
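A compact Python sketch of that flow, with dictionaries standing in for SNCs, virtual agents, and pods (the cluster APIs a real system would call are omitted):

    def configure_sncs(cluster_config):
        agents = {}
        # Configure one SNC per node and a virtual agent associated with it.
        for snc in cluster_config["sncs"]:
            agents[snc["name"]] = {"node": snc["node"], "pods": []}
        return agents

    def process_deployment(agents, deployment_request):
        # Each pod in the request is deployed on its target SNC using that
        # SNC's virtual agent.
        for item in deployment_request:
            agents[item["snc"]]["pods"].append(item["pod"])

    agents = configure_sncs({"sncs": [{"name": "snc1", "node": "node-1"},
                                      {"name": "snc2", "node": "node-2"}]})
    process_deployment(agents, [{"pod": "pod-a", "snc": "snc1"},
                                {"pod": "pod-b", "snc": "snc2"}])
    print(agents["snc1"]["pods"], agents["snc2"]["pods"])   # ['pod-a'] ['pod-b']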
Abstract:
Connectivity between data centers in a hybrid cloud system is optimized by pre-loading a wide area network (WAN) optimization appliance in a first data center with data used to initialize at least one WAN optimization of application traffic. The first data center is managed by a first organization, and a second data center is managed by a second organization, the first organization being a tenant in the second data center. The described technique includes receiving, at the WAN optimization appliance, application packets containing application data generated by an application executing in the first data center, the packets being received from a first gateway in the first data center, and performing the at least one WAN optimization on the application packets using the pre-loaded data.
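One WAN optimization that benefits from pre-loading is data de-duplication; the Python sketch below seeds a de-duplication dictionary up front (the chunking and hashing choices are deliberately simplistic and not taken from the source):

    import hashlib

    class WanOptimizer:
        def __init__(self, preloaded_chunks=()):
            # Pre-loading seeds the de-duplication dictionary so that even the
            # first application packets across the WAN can be reduced.
            self.dictionary = {hashlib.sha256(c).hexdigest(): c for c in preloaded_chunks}

        def optimize(self, payload):
            key = hashlib.sha256(payload).hexdigest()
            if key in self.dictionary:
                return ("ref", key)        # send a short reference instead of the bytes
            self.dictionary[key] = payload
            return ("raw", payload)        # first sighting: send the bytes, remember them

    optimizer = WanOptimizer(preloaded_chunks=[b"common application header"])
    print(optimizer.optimize(b"common application header")[0])   # 'ref'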
Abstract:
The disclosure provides an approach for simulating a virtual environment. A method includes simulating, using a virtualization simulator, a plurality of hosts; simulating, using the virtualization simulator, a plurality of virtual computing instances (VCIs) associated with the plurality of simulated hosts, based on information obtained from a cluster application programming interface (API) provider; creating, using a virtualization simulator operator, one or more node simulator schedulers; creating, using the one or more node simulator schedulers, a node simulator; simulating, using the node simulator, a plurality of guest operating systems (OSs) associated with the plurality of simulated VCIs; and joining the plurality of simulated guest OSs to one or more node clusters in a data center via an API server.
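A data-only Python sketch of that simulation pipeline; the simulator, schedulers, and API server interactions are reduced to list building, and all names are illustrative:

    def simulate_environment(num_hosts, vcis_per_host):
        # Hosts and VCIs produced by the virtualization simulator.
        hosts = ["sim-host-{}".format(i) for i in range(num_hosts)]
        vcis = ["sim-vci-{}-{}".format(i, j)
                for i in range(num_hosts) for j in range(vcis_per_host)]
        # The node simulator (created by a node simulator scheduler) backs one
        # simulated guest OS per simulated VCI.
        guest_oses = ["sim-guest-os-" + v for v in vcis]
        # Joining the simulated guest OSs to a node cluster is modeled as a
        # registration step that a real system would perform via an API server.
        cluster = {"nodes": guest_oses}
        return {"hosts": hosts, "vcis": vcis, "cluster": cluster}

    env = simulate_environment(num_hosts=2, vcis_per_host=3)
    print(len(env["cluster"]["nodes"]))   # 6 simulated nodes joined to the cluster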
Abstract:
Techniques for stateful connection optimization over stretched networks are disclosed. In one embodiment, hypervisor filtering modules in a cloud computing system are configured to modify packets sent by virtual computing instances (e.g., virtual machines (VMs)) in the cloud to local destinations in the cloud such that those packets have the destination Media Access Control (MAC) address of a local router that is also in the cloud. Doing so prevents tromboning traffic flows in which packets sent by virtual computing instances in the cloud to local destinations are routed to a stretched network's default gateway that is not in the cloud.
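A simplified Python sketch of that filtering rule; the MAC addresses and subnet below are assumptions used only to make the example concrete:

    import ipaddress

    LOCAL_ROUTER_MAC = "02:50:56:00:00:01"      # router that is also in the cloud
    DEFAULT_GATEWAY_MAC = "02:50:56:ff:ff:01"   # stretched network's gateway, not in the cloud
    CLOUD_LOCAL_SUBNETS = [ipaddress.ip_network("192.168.2.0/24")]

    def filter_outbound(packet):
        dst_ip = ipaddress.ip_address(packet["dst_ip"])
        if any(dst_ip in subnet for subnet in CLOUD_LOCAL_SUBNETS):
            # Local destination: rewrite the destination MAC to the local cloud
            # router so the packet never trombones out to the remote gateway.
            packet["dst_mac"] = LOCAL_ROUTER_MAC
        return packet

    pkt = {"dst_ip": "192.168.2.10", "dst_mac": DEFAULT_GATEWAY_MAC}
    print(filter_outbound(pkt)["dst_mac"])   # 02:50:56:00:00:01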
Abstract:
Techniques for creating layer 2 (L2) extension networks are disclosed. One embodiment permits an L2 extension network to be created by deploying, configuring, and connecting a pair of virtual appliances in the data center and the cloud so that the appliances communicate via secure tunnels and bridge networks in the data center and the cloud. A pair of virtual appliances are first deployed in the data center and the cloud, and secure tunnels are then created between the virtual appliances. Thereafter, a stretched network is created by connecting a network interface in each of the virtual appliances to a respective local network, configuring virtual switch ports to which the virtual appliances are connected as sink ports that receive traffic with non-local destinations, and configuring each of the virtual appliances to bridge the network interface therein that is connected to the local network and tunnels between the pair of virtual appliances.
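A rough Python sketch of those stretch-creation steps, with in-memory objects standing in for the appliances, tunnels, and virtual switch ports:

    def create_l2_extension(dc_network, cloud_network):
        # 1. Deploy a virtual appliance in the data center and one in the cloud.
        dc_appliance = {"site": "datacenter"}
        cloud_appliance = {"site": "cloud"}
        # 2. Create secure tunnels between the pair of virtual appliances.
        tunnel = {"endpoints": ("datacenter", "cloud"), "encrypted": True}
        # 3. Connect a network interface in each appliance to its local network,
        #    configure the switch port it uses as a sink port (receiving traffic
        #    with non-local destinations), and bridge that interface with the
        #    tunnels between the appliances.
        for appliance, network in ((dc_appliance, dc_network), (cloud_appliance, cloud_network)):
            appliance["local_nic"] = {"network": network, "port_mode": "sink"}
            appliance["bridge"] = [appliance["local_nic"], tunnel]
        return {"datacenter": dc_appliance, "cloud": cloud_appliance}

    stretched = create_l2_extension("dc-vlan-10", "cloud-segment-10")
    print(stretched["cloud"]["local_nic"]["port_mode"])   # sink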
Abstract:
A hybrid computing system includes an on-premise data center and a cloud computing system. To connect an organization's multiple data centers, a gateway may utilize the existing connections between the on-premise data center and the cloud computing system rather than a direct connection to the other of the organization's data centers.
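A small Python sketch of that path selection, assuming a simple set of site-to-site links (all site names are hypothetical):

    def pick_path(src_site, dst_site, links):
        if (src_site, dst_site) in links:
            return [src_site, dst_site]              # a direct connection exists
        # Otherwise relay through the cloud computing system that both of the
        # organization's data centers already connect to.
        if (src_site, "cloud") in links and ("cloud", dst_site) in links:
            return [src_site, "cloud", dst_site]
        raise ValueError("no path between sites")

    links = {("dc-east", "cloud"), ("cloud", "dc-west")}
    print(pick_path("dc-east", "dc-west", links))    # ['dc-east', 'cloud', 'dc-west']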