Abstract:
Conditional address translation is performed in a multi-tenant cloud infrastructure to effectively support tenant-assigned addresses. For each tenant, the multi-tenant cloud infrastructure deploys both a private network used to communicate between the tenant and the cloud and a tenant-facing gateway that manages the private network. The multi-tenant cloud infrastructure also includes a public-facing gateway used to communicate between the multi-tenant cloud and a public network. The tenant-facing gateways are configured to bypass address translation, providing consistent addressing across each private network irrespective of the physical location of the resources linked by the private network. By contrast, the public-facing gateway is configured to translate source addresses in outgoing packets to addresses that are unique within the public network. Advantageously, selectively translating addresses in this way enables multiple tenants to interact in a uniform fashion with both on-premises resources and cloud-hosted resources without incurring address collisions between tenants.
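A minimal sketch of the conditional translation described above. The gateway roles, the packet fields, and the per-tenant public address map are assumptions made for illustration; they are not taken from the abstract itself.

def forward(packet, gateway_role, public_addr_by_tenant):
    """Forward a packet, translating the source address only at the
    public-facing gateway."""
    if gateway_role == "tenant-facing":
        # Tenant-facing gateways bypass translation, so the tenant-assigned
        # address stays consistent across the private network.
        return packet
    if gateway_role == "public-facing":
        # The public-facing gateway rewrites the source to an address that is
        # unique within the public network, avoiding collisions between tenants.
        translated = dict(packet)
        translated["src"] = public_addr_by_tenant[packet["tenant"]]
        return translated
    raise ValueError(f"unknown gateway role: {gateway_role}")

# Example: two tenants may reuse the same private address internally, yet
# their outgoing packets receive distinct public source addresses.
pkt_a = {"tenant": "A", "src": "10.0.0.5", "dst": "198.51.100.7"}
pkt_b = {"tenant": "B", "src": "10.0.0.5", "dst": "198.51.100.7"}
public_map = {"A": "203.0.113.10", "B": "203.0.113.11"}
print(forward(pkt_a, "public-facing", public_map)["src"])  # 203.0.113.10
print(forward(pkt_b, "public-facing", public_map)["src"])  # 203.0.113.11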
Abstract:
An example method optimizes connectivity between data centers in a hybrid cloud system having a first data center managed by a first organization and a second data center managed by a second organization, the first organization being a tenant in the second data center. The method includes probing a wide area network (WAN) with test packets by varying an internet protocol (IP) flow tuple of the test packets across a set of IP flows. The method includes identifying a plurality of paths between a gateway of the first data center and another gateway of the second data center associated with the set of IP flows. The method further includes selecting an IP flow from the set of IP flows for an application executing in the first data center. The method further includes establishing a path-optimized connection between the gateway and the other gateway through the WAN, the connection using the selected IP flow, for use by the application.
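A minimal sketch of the probing step, under illustrative assumptions: one element of the IP flow tuple (the source port here) is varied across a set of candidate flows, each flow is measured, and the best one is chosen for the application. The probe function is a stand-in; a real implementation would send test packets through the WAN and time the responses.

import random

def probe_rtt_ms(flow):
    """Placeholder probe: returns a simulated round-trip time for a flow."""
    random.seed(hash(flow))  # fixed per flow within a run, for the example
    return 20 + random.random() * 80

def candidate_flows(src_ip, dst_ip, dst_port, src_ports):
    """Build the set of IP flow tuples to test, one per source port."""
    return [(src_ip, sp, dst_ip, dst_port, "udp") for sp in src_ports]

def select_flow(flows):
    """Probe every candidate flow and return the one with the lowest RTT."""
    measured = {flow: probe_rtt_ms(flow) for flow in flows}
    return min(measured, key=measured.get), measured

flows = candidate_flows("192.0.2.1", "198.51.100.2", 4789, range(40000, 40008))
best, results = select_flow(flows)
print("selected flow:", best, "rtt:", round(results[best], 1), "ms")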
Abstract:
An approach is disclosed for steering network traffic away from congestion hotspots to achieve better throughput and latency. In one embodiment, multiple Foo-over-UDP (FOU) tunnels, each having a distinct source port, are created between two endpoints. As a result of the distinct source ports, routers that compute hashes of packet fields in order to distribute traffic flows across network paths will compute distinct hash values for the FOU tunnels that may be associated with different paths. Probes are scheduled to measure network metrics, such as latency and liveliness, of each of the FOU tunnels. In turn, the network metrics are used to select particular FOU tunnel(s) to send traffic over so as to avoid congestion and high-latency hotspots in the network.
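A minimal sketch, under illustrative assumptions, of why distinct source ports spread FOU tunnels across different paths and how the measured metrics pick the tunnel to use. The hash function and the metric values are placeholders; real routers apply their own ECMP hash and real probes would measure the network.

import zlib

def ecmp_path(src_ip, src_port, dst_ip, dst_port, num_paths):
    """Illustrative ECMP: hash the flow tuple to choose one of several paths."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % num_paths

# Each FOU tunnel differs only in its source port, so it may hash to a
# different path through the network.
tunnels = [("10.0.0.1", port, "10.0.1.1", 5555) for port in (40001, 40002, 40003)]
for t in tunnels:
    print("tunnel", t[1], "-> path", ecmp_path(*t, num_paths=4))

# Probes (placeholder numbers) report latency and liveliness per tunnel;
# traffic is steered onto the live tunnel with the lowest latency.
metrics = {40001: {"latency_ms": 45.0, "alive": True},
           40002: {"latency_ms": 12.0, "alive": True},
           40003: {"latency_ms": 8.0,  "alive": False}}
best = min((p for p, m in metrics.items() if m["alive"]),
           key=lambda p: metrics[p]["latency_ms"])
print("send traffic over tunnel with source port", best)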
Abstract:
A cloud computing system may include multiple cloud data centers. A gateway may establish connections between a cloud provider's multiple data centers using knowledge about the types of application workloads executing within the cloud computing system, and may further base those connections on determined policies indicating priorities for routing traffic for the application workloads.
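A minimal sketch of the policy-driven choice, with invented workload types, priorities, and connection names that do not come from the abstract: the gateway consults a policy that ranks traffic priorities per application workload and routes each workload over the corresponding inter-data-center connection.

POLICY = {
    # workload type -> (priority, preferred inter-data-center connection)
    "latency-sensitive": (1, "direct-link"),
    "bulk-replication":  (2, "high-throughput-link"),
    "best-effort":       (3, "internet-vpn"),
}

def route_for(workload_type):
    """Return the connection a workload should use, defaulting to best effort."""
    priority, connection = POLICY.get(workload_type, POLICY["best-effort"])
    return connection

print(route_for("latency-sensitive"))  # direct-link
print(route_for("unknown-workload"))   # internet-vpn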
Abstract:
A method is disclosed for managing an application executing in a computing system that includes a private cloud operated by a first organization and a multi-tenant public cloud of which the first organization is one of the tenants. The method comprises instantiating a first virtual object in the private cloud and instantiating a second virtual object in the public cloud for executing the application cooperatively with the first virtual object. A mapping associated with the first virtual object is generated, wherein the mapping comprises a first identifier having a context of the private cloud and a second identifier having a context of the public cloud. The method further includes detecting migration of the first or second virtual object such that both of the first and second virtual objects are instantiated in a single one of the private and public clouds, and updating the mapping to reflect the migration.
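A minimal sketch of the mapping described above, using an invented dictionary layout: each virtual object carries one identifier that is meaningful in the private cloud and one that is meaningful in the public cloud, and the mapping is updated when a migration places both objects in a single cloud.

mapping = {
    "object-1": {"private_id": "vm-priv-001", "public_id": "i-0abc123", "location": "private"},
    "object-2": {"private_id": "vm-priv-002", "public_id": "i-0def456", "location": "public"},
}

def on_migration(obj_name, new_location):
    """Record that a virtual object now runs in a different cloud and refresh
    the mapping so both identifiers continue to resolve correctly."""
    mapping[obj_name]["location"] = new_location

# Example: object-2 migrates into the private cloud, so both objects are now
# instantiated in a single one of the two clouds.
on_migration("object-2", "private")
print(all(entry["location"] == "private" for entry in mapping.values()))  # True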
Abstract:
A centralized namespace controller allocates addresses in a distributed cloud infrastructure on demand. Upon receiving a request to allocate addresses for a network to be provisioned by a cloud computing system included in the distributed cloud infrastructure, the centralized namespace controller allocates a network address that is unique within the distributed cloud infrastructure. Further, the centralized namespace controller allocates a range of virtual network interface card (NIC) addresses that are unique within the network. The centralized namespace controller then allocates addresses from the range of virtual NIC addresses on an as-requested basis, that is, when a virtual NIC is being created by the cloud computing system on the network. Advantageously, by centralizing the allocation of addresses and dedicating independent NIC address ranges to different cloud computing systems, the centralized namespace controller enables stretched L2 networks between cloud computing systems while preventing duplicated addresses on the stretched networks.
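A minimal sketch of the allocation scheme, with made-up address formats and class names: the controller hands out one network identifier that is unique across the infrastructure, reserves a disjoint block of virtual NIC (MAC-style) addresses per requesting cloud, and allocates individual NIC addresses from that block only when a virtual NIC is created.

import itertools

class NamespaceController:
    def __init__(self):
        self._network_ids = itertools.count(1)
        self._next_block_start = 0
        self._blocks = {}  # (cloud, network id) -> iterator over the reserved block

    def allocate_network(self, cloud, block_size=256):
        """Allocate a unique network id and a dedicated NIC address block."""
        net_id = next(self._network_ids)
        start = self._next_block_start
        self._next_block_start += block_size
        self._blocks[(cloud, net_id)] = iter(range(start, start + block_size))
        return net_id

    def allocate_nic(self, cloud, net_id):
        """Hand out the next unused NIC address in that cloud's block."""
        suffix = next(self._blocks[(cloud, net_id)])
        return f"02:00:00:00:{suffix // 256:02x}:{suffix % 256:02x}"

controller = NamespaceController()
net = controller.allocate_network("cloud-a")
print(controller.allocate_nic("cloud-a", net))  # 02:00:00:00:00:00
print(controller.allocate_nic("cloud-a", net))  # 02:00:00:00:00:01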
Abstract:
A hybrid cloud computing system is managed by determining communication affinity among a cluster of virtual machines, where one virtual machine in the cluster executes in a virtualized computing system, another virtual machine in the cluster executes in a cloud computing environment, and the virtualized computing system is managed by a tenant that accesses the cloud computing environment. After determining, based on the communication affinity, a target location in the hybrid cloud computing system to host the cluster of virtual machines, at least one virtual machine in the cluster is migrated to the target location.
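A minimal sketch of affinity-driven placement with invented traffic numbers: the cluster's virtual machines exchange data with resources pinned to each site, the target location is the site that keeps the most of that traffic local, and any virtual machine running elsewhere is migrated there.

traffic_to_site = {  # bytes/sec between each cluster VM and fixed resources per site
    "vm1": {"on-prem": 500, "cloud": 20},
    "vm2": {"on-prem": 30,  "cloud": 400},
}
current_location = {"vm1": "on-prem", "vm2": "cloud"}

def choose_target(sites=("on-prem", "cloud")):
    """Pick the site with the highest total communication affinity."""
    return max(sites, key=lambda s: sum(t[s] for t in traffic_to_site.values()))

target = choose_target()
to_migrate = [vm for vm, loc in current_location.items() if loc != target]
print("target:", target, "migrate:", to_migrate)  # target: on-prem migrate: ['vm2']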
Abstract:
A cloud computing system retrieves routing entries associated with a particular tenant of the cloud computing system, the retrieved entries forming a subset of the routing table for the entire cloud computing system. The routing entries are loaded into a networking switch, which is configured to route network packets using the loaded subset of routing entries on a general-purpose processor rather than a costly dedicated ASIC.
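A minimal sketch of the per-tenant subsetting step, with an invented routing table layout: the full cloud routing table is filtered down to one tenant's entries, and only that subset is handed to a software switch, which can then perform lookups on a general-purpose CPU.

from ipaddress import ip_address, ip_network

full_routing_table = [
    {"tenant": "t1", "prefix": "10.1.0.0/16", "next_hop": "192.0.2.1"},
    {"tenant": "t2", "prefix": "10.2.0.0/16", "next_hop": "192.0.2.2"},
    {"tenant": "t1", "prefix": "10.3.0.0/24", "next_hop": "192.0.2.3"},
]

def load_tenant_routes(table, tenant):
    """Return only the entries the switch needs for one tenant."""
    return [entry for entry in table if entry["tenant"] == tenant]

def route(subset, dst):
    """Longest-prefix match over the tenant's subset of routes."""
    matches = [e for e in subset if ip_address(dst) in ip_network(e["prefix"])]
    if not matches:
        return None
    return max(matches, key=lambda e: ip_network(e["prefix"]).prefixlen)["next_hop"]

t1_routes = load_tenant_routes(full_routing_table, "t1")
print(route(t1_routes, "10.3.0.42"))  # 192.0.2.3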
Abstract:
The order of migrating virtual computing instances from a private data center to a public cloud is optimized using a traveling salesman problem (TSP) solver. The method of migrating a plurality of virtual computing instances that are in communication with each other within a private data center to a public cloud includes the steps of assigning, for each pair of virtual computing instances, a numerical value that represents the amount of data transmitted between the pair over a predetermined period of time, determining a recommended order of migration for the virtual computing instances based on the assigned numerical values, and migrating the virtual computing instances according to the recommended order.
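A minimal sketch of the ordering step. The traffic matrix is invented, and a simple greedy heuristic stands in for a real TSP solver: starting from the most talkative pair, the not-yet-ordered instance with the heaviest traffic to the most recently added one is appended next, so heavily communicating instances end up migrated close together.

from itertools import combinations

# Bytes exchanged per pair over the observation window (placeholder values).
traffic = {("vm1", "vm2"): 900, ("vm1", "vm3"): 100,
           ("vm2", "vm3"): 300, ("vm1", "vm4"): 10,
           ("vm2", "vm4"): 20,  ("vm3", "vm4"): 800}

def pair_traffic(a, b):
    return traffic.get((a, b), traffic.get((b, a), 0))

def migration_order(vms):
    """Greedy nearest-neighbour ordering over the pairwise traffic values."""
    start = max(combinations(vms, 2), key=lambda p: pair_traffic(*p))
    order = list(start)
    remaining = [v for v in vms if v not in order]
    while remaining:
        nxt = max(remaining, key=lambda v: pair_traffic(order[-1], v))
        order.append(nxt)
        remaining.remove(nxt)
    return order

print(migration_order(["vm1", "vm2", "vm3", "vm4"]))  # ['vm1', 'vm2', 'vm3', 'vm4']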