Abstract:
In an example, a network switch is configured to operate natively as a load balancer. The switch receives incoming traffic on a first interface communicatively coupled to a first network and assigns the traffic to one of a plurality of traffic buckets. This may include looking up the destination IP address of an incoming packet in a fast memory, such as a ternary content-addressable memory (TCAM), to determine whether the packet is directed to a virtual IP (VIP) address that is to be load balanced. If so, part of the source IP address may be used as a search tag in the TCAM to assign the incoming packet to a traffic bucket or to the IP address of a service node.
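For illustration only, the lookup-and-bucket flow described above can be sketched in software; the addresses, the four-bucket table, and the two-bit mask below are hypothetical, and a real switch would perform the match in TCAM hardware rather than in Python:

    # Software stand-in for the described TCAM flow: check whether a packet's
    # destination is a load-balanced VIP, then use low-order source-IP bits as
    # a "search tag" to pick a traffic bucket mapped to a service node.
    import ipaddress

    VIPS = {ipaddress.ip_address("10.0.0.100")}              # VIPs to load balance
    BUCKET_TO_NODE = {0: "192.168.1.10", 1: "192.168.1.11",
                      2: "192.168.1.12", 3: "192.168.1.13"}  # four traffic buckets
    BUCKET_MASK = 0x3                                         # low two source-IP bits

    def assign(packet_src: str, packet_dst: str) -> str | None:
        """Return the service-node IP for a packet, or None if not load balanced."""
        if ipaddress.ip_address(packet_dst) not in VIPS:
            return None                                       # ordinary switching path
        tag = int(ipaddress.ip_address(packet_src)) & BUCKET_MASK
        return BUCKET_TO_NODE[tag]

    print(assign("172.16.5.7", "10.0.0.100"))   # -> one of the four service nodes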
Abstract:
In an example, a network switch is configured to natively act as a high-speed load balancer. Numerous load-balancing techniques may be used, including one that bases the traffic “bucket” on a source IP address of an incoming packet. This particular technique provides a network administrator a powerful tool for shaping network traffic. For example, by assigning certain classes of computers on the network particular IP addresses, the network administrator can ensure that the traffic is load balanced in a desirable fashion. To further increase flexibility, the network administrator may apply a bit mask to the IP address, and expose only a portion, selected from a desired octet of the address.
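A minimal sketch of the masked-octet bucketing described above, assuming a software stand-in for the switch; the octet choice, mask value, and example address are illustrative, not taken from the abstract:

    # The administrator picks an octet of the source address and a bit mask
    # within it; the exposed bits select the traffic bucket.
    import ipaddress

    def bucket_for(src_ip: str, octet: int, mask: int) -> int:
        """Select a bucket from the masked bits of one octet of the source IP.

        octet: 0-3, counting from the most significant octet.
        mask:  bit mask applied to that octet (e.g. 0x0F exposes its low nibble).
        """
        packed = ipaddress.ip_address(src_ip).packed          # 4 bytes for IPv4
        return packed[octet] & mask

    # e.g. bucket on the low nibble of the last octet -> 16 possible buckets
    print(bucket_for("10.1.2.53", octet=3, mask=0x0F))        # -> 5 (0x35 & 0x0F)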
Abstract:
In accordance with one example embodiment, a system configured for providing multifunctional switching is disclosed. The system is configured for filtering at least some incoming traffic to select network packets originating from one or more predefined sources and destined to a predefined destination; load balancing at least some of the selected network packets among a plurality of server nodes to assign each network packet to one server node of the plurality of server nodes; for each network packet assigned to a server node, replacing the destination address of the predefined destination with the destination address of the assigned server node; and forwarding each assigned network packet in accordance with the replaced destination address in the network packet.
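The filter / load balance / rewrite / forward sequence in this abstract can be modeled loosely in software; the round-robin node selection, the allowed source prefix, and the packet representation below are assumptions made for the sketch:

    import ipaddress
    from itertools import cycle

    ALLOWED_SOURCES = ["172.16.0.0/16"]        # predefined sources (illustrative)
    VIP = "10.0.0.100"                          # predefined destination
    SERVER_NODES = cycle(["192.168.1.10", "192.168.1.11", "192.168.1.12"])

    def handle(packet: dict) -> dict | None:
        """Return the packet to forward with its destination rewritten, or None."""
        src = ipaddress.ip_address(packet["src"])
        selected = any(src in ipaddress.ip_network(net) for net in ALLOWED_SOURCES)
        if not selected or packet["dst"] != VIP:
            return None                         # not selected for load balancing
        rewritten = dict(packet, dst=next(SERVER_NODES))   # replace destination address
        return rewritten                        # forward per the replaced destination

    print(handle({"src": "172.16.9.9", "dst": "10.0.0.100"}))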
Abstract:
In an example, there is disclosed a load balancing network apparatus, including a first network interface operable to communicatively couple to a first network; a plurality of second network interfaces operable to communicatively couple to a second network; and one or more logic elements providing a load balancing engine operable for: receiving an address mask; receiving an incoming network packet; masking a destination virtual network address with the address mask to match a plurality of virtual IP addresses; and load balancing the incoming network packet to a plurality of service nodes. There is also disclosed one or more computer-readable mediums including instructions for carrying out the operations, and a method of providing load balancing including carrying out the operations.
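As a rough illustration of the address-mask matching described above, a single masked entry (written here as a /28 prefix, an assumed value) stands for a plurality of virtual IP addresses, and matching traffic is spread across an assumed pool of service nodes:

    import ipaddress

    VIP_MASKED = ipaddress.ip_network("10.0.0.96/28")        # matches 16 virtual IPs
    SERVICE_NODES = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]

    def load_balance(dst_ip: str) -> str | None:
        dst = ipaddress.ip_address(dst_ip)
        if dst not in VIP_MASKED:                             # mask-and-compare match
            return None
        # spread the matched virtual address across the service-node pool
        return SERVICE_NODES[int(dst) % len(SERVICE_NODES)]

    print(load_balance("10.0.0.101"))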
Abstract:
In an example, there is disclosed a network apparatus, comprising: one or more logic elements, including at least one hardware logic element, to provide a network manager engine to: provide a switched fabric management function; communicatively couple to at least one network switch, the network switch configured to provide optional native hardware-based load balancing; monitor one or more load balancing factors; and at least partly responsive to the one or more load balancing factors, configure native hardware-based load balancing on the at least one network switch.
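A loose sketch of the management flow in this abstract, with an invented Switch class, threshold, and factor name standing in for the fabric manager's real monitoring and configuration interfaces:

    # The manager watches one or more load-balancing factors and, when a
    # threshold is crossed, enables and configures native hardware load
    # balancing on a managed switch.
    class Switch:
        def __init__(self, name: str):
            self.name = name
            self.native_lb_enabled = False
            self.lb_config: dict = {}

        def configure_native_lb(self, config: dict) -> None:
            self.native_lb_enabled = True
            self.lb_config = config

    def manage(switch: Switch, factors: dict) -> None:
        """Enable native load balancing when monitored factors warrant it."""
        if factors.get("connections_per_sec", 0) > 10_000:    # assumed threshold
            switch.configure_native_lb({"vip": "10.0.0.100",
                                        "nodes": ["192.168.1.10", "192.168.1.11"]})

    sw = Switch("leaf-1")
    manage(sw, {"connections_per_sec": 25_000})
    print(sw.native_lb_enabled, sw.lb_config)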
Abstract:
In one embodiment, load balancing criteria and an indication of a plurality of network nodes are received. A plurality of forwarding entries are created based on the load balancing criteria and the indication of the plurality of network nodes. A content addressable memory of a network element is programmed with the plurality of forwarding entries. The network element selectively load balances network traffic by applying the plurality of forwarding entries to the network traffic, wherein network traffic meeting the load balancing criteria is load balanced among the plurality of network nodes.
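One way to picture this embodiment in software: build forwarding entries from the criteria and node list, "program" them into a dictionary standing in for the content addressable memory, and apply them to traffic; the entry format, two-bit source tag, and addresses are assumptions of the sketch:

    import ipaddress

    def build_entries(nodes: list[str], bits: int = 2) -> dict[int, str]:
        """One forwarding entry per value of the low source-IP bits."""
        return {tag: nodes[tag % len(nodes)] for tag in range(2 ** bits)}

    def program_cam(cam: dict, entries: dict) -> None:
        """Stand-in for programming entries into the content addressable memory."""
        cam.clear()
        cam.update(entries)

    def forward(cam: dict, src: str, dst: str, vip: str, bits: int = 2) -> str:
        if dst != vip:                          # traffic not meeting the criteria
            return dst                          #   is forwarded normally
        tag = int(ipaddress.ip_address(src)) & (2 ** bits - 1)
        return cam[tag]                         # load balanced among the nodes

    cam: dict = {}
    program_cam(cam, build_entries(["192.168.1.10", "192.168.1.11"]))
    print(forward(cam, "172.16.4.7", "10.0.0.100", vip="10.0.0.100"))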
Abstract:
In an example, there is provided a network apparatus for providing native load balancing within a switch, including a first network interface operable to communicatively couple to a first network; a plurality of second network interfaces operable to communicatively couple to a second network, the second network comprising a service pool of service nodes; one or more logic elements providing a switching engine operable for providing network switching; and one or more logic elements comprising a load balancing engine operable for: load balancing incoming network traffic to the service pool via native hardware according to a load balancing configuration; detecting a new service node added to the service pool; and adjusting the load balancing configuration to account for the new service node; wherein the switching engine and load balancing engine are configured to be provided on the same hardware as each other and as the first network interface and the plurality of second network interfaces.
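The pool-growth behavior in this abstract can be sketched, under assumed data structures, as recomputing a bucket-to-node table whenever a new service node is detected; the bucket count and node addresses are illustrative:

    NUM_BUCKETS = 8

    def rebalance(nodes: list[str]) -> dict[int, str]:
        """Spread the fixed set of traffic buckets over the current node pool."""
        return {b: nodes[b % len(nodes)] for b in range(NUM_BUCKETS)}

    pool = ["192.168.1.10", "192.168.1.11"]
    config = rebalance(pool)

    pool.append("192.168.1.12")                 # new service node detected
    config = rebalance(pool)                    # adjust the LB configuration
    print(config)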