Abstract:
The present disclosure is directed generally to systems and methods for Diameter load balancing. In some embodiments, an intermediary device may receive a Diameter connection request from a client that includes a Capabilities-Exchange-Request (CER). The intermediary device may initiate a connection with a server of a plurality of servers and place the server protocol control block in a reuse pool. Responsive to opening the connection with the server, the intermediary device may forward the received CER. The intermediary device may then receive a Capabilities-Exchange-Answer (CEA) message from the server and establish a persistent connection based on an attribute-value pair (AVP). The intermediary device may modify the received CEA message and then forward the message to the client. When the intermediary device receives a Diameter message from a client, the intermediary device may match an AVP of the message with an AVP associated with a persistent server connection and forward the Diameter message to the corresponding server.
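Below is a minimal sketch of how such an AVP-based persistence lookup might work. The choice of Session-Id as the persistence-key AVP, the class and method names, and the round-robin pick for new sessions are illustrative assumptions, not the disclosed implementation.

    # Illustrative sketch only: AVP-based persistence for Diameter load balancing.
    # The persistence-key AVP (Session-Id, code 263 per RFC 6733) and all names
    # below are assumptions made for this example.

    SESSION_ID_AVP = 263  # Diameter Session-Id AVP code

    class DiameterPersistenceTable:
        """Maps a persistence-key AVP value to an established server connection."""

        def __init__(self, servers):
            self.servers = servers      # established back-end server connections
            self.table = {}             # AVP value -> server connection
            self.next_idx = 0           # simple round-robin pick for new sessions

        def server_for(self, avps):
            """avps: dict of AVP code -> value for the received Diameter message."""
            key = avps.get(SESSION_ID_AVP)
            if key in self.table:
                return self.table[key]  # reuse the persistent server connection
            server = self.servers[self.next_idx % len(self.servers)]
            self.next_idx += 1
            if key is not None:
                self.table[key] = server  # pin this AVP value to the chosen server
            return server

Subsequent messages carrying the same Session-Id AVP value would then be forwarded over the same server connection, which is the persistence behavior the abstract describes.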
Abstract:
The present application is directed towards systems and methods that allow a user to configure the backup locations used by an intermediary device providing Global Server Load Balancing (GSLB) services when a primary location is down. In some embodiments, when GSLB is based on the static proximity of the client's location to the GSLB sites and the primary location is down, the request may be load balanced among all the other locations. Because this may not be desirable in many cases, the user is given the option to specify a preferred list of backup locations to service a client request. The present solution achieves this configurability through a GSLB policy based on preferred location. One can configure one or more preferred locations via a GSLB policy to redirect the client to a preferred location, and one can configure individual policies for different client locations. Based on implementation requirements, one can configure country-level granularity, state-level granularity, and so on.
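The following sketch shows one way a preferred-location policy could drive backup selection when the primary GSLB location is down. The policy table, location codes, and is_up() health check are hypothetical examples, not configuration from the disclosure.

    # Illustrative sketch: choosing a backup GSLB location from a per-client-location
    # preference list when the statically closest (primary) location is down.
    # The policy table, location names, and is_up() helper are assumptions.

    preferred_backups = {
        # client location -> ordered list of preferred backup GSLB locations
        "US-CA": ["US-OR", "US-TX", "EU-DE"],
        "IN-KA": ["IN-MH", "SG", "EU-DE"],
    }

    def pick_location(client_location, primary, all_locations, is_up):
        """Return the GSLB location that should serve the client."""
        if is_up(primary):
            return primary                      # static proximity wins when healthy
        for backup in preferred_backups.get(client_location, []):
            if is_up(backup):
                return backup                   # first healthy preferred backup
        # No policy, or no healthy preferred backup: fall back to load balancing
        # across the remaining healthy locations.
        healthy = [loc for loc in all_locations if loc != primary and is_up(loc)]
        return healthy[0] if healthy else None

Keying the policy table by client location is what allows the country- or state-level granularity mentioned above.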
Abstract:
For multiple multi-core nodes in a cluster, a filtered-statistics client contacts the aggregator on a master node of the cluster, referred to as the cluster configuration owner (“CCO”) or cluster coordinator, and expects the statistics aggregated from all the cluster nodes. The aggregator on the CCO node relays the client request to the packet engines on the CCO node and to an aggregator on each of the other nodes in the cluster. The CCO node aggregator then gets responses from the other cores on the node and from all of the other cluster node aggregators. The CCO node aggregator aggregates the responses and sends the aggregated response back to the client. Communication between nodes is via a static, authenticated communication channel.
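A minimal sketch of this relay-and-aggregate flow on the CCO node follows. The query helpers and the counter-summing merge rule are assumptions made for illustration only.

    # Illustrative sketch: the CCO-node aggregator relays a filtered-statistics
    # request to its local packet engines and to the aggregator on every other
    # cluster node, then merges the responses. The query_* helpers and the
    # counter-summing merge rule are assumptions for this example.

    def aggregate_cluster_stats(request, local_engines, peer_nodes,
                                query_engine, query_peer):
        """Collect and merge statistics for one client request on the CCO node."""
        responses = []

        # 1. Relay to the packet engines on the CCO node itself.
        for engine in local_engines:
            responses.append(query_engine(engine, request))

        # 2. Relay to the aggregator on each other node
        #    (over the authenticated inter-node channel).
        for node in peer_nodes:
            responses.append(query_peer(node, request))

        # 3. Merge: counters with the same name are summed here.
        merged = {}
        for stats in responses:
            for name, value in stats.items():
                merged[name] = merged.get(name, 0) + value
        return merged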
Abstract:
The present invention is directed towards systems and methods for load balancing by a multi-core device intermediary between clients and services. The device may establish sub-slots in each slot of the device's packet engines. The number of sub-slots may correspond to the packet engine count. Each slot may track a different number of active connections allocated to a service. The device may assign a first and a second service to each packet engine in a first slot corresponding to no active connections. These services may be assigned to different sub-slots in adjacent packet engines. Responsive to allocation of a first active connection to the first service, the device may move the first service from a sub-slot in the first slot of a first packet engine to a corresponding sub-slot in a second slot. The second slot may correspond to one active connection allocated to the first service.
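The sketch below outlines one possible bookkeeping structure for the slots and sub-slots described above. The names, the fixed slot bound, and the single-packet-engine view are simplifying assumptions.

    # Illustrative sketch: per-packet-engine slots indexed by active-connection
    # count, each divided into sub-slots (one per packet engine). A service moves
    # to the corresponding sub-slot of the next slot when a connection is
    # allocated to it. Names and the MAX_CONNECTIONS bound are assumptions.

    MAX_CONNECTIONS = 8   # number of slots tracked per packet engine
    NUM_ENGINES = 4       # packet engine count == sub-slots per slot

    class PacketEngineSlots:
        def __init__(self):
            # slots[c][s] = services with c active connections in sub-slot s
            self.slots = [[set() for _ in range(NUM_ENGINES)]
                          for _ in range(MAX_CONNECTIONS)]
            self.position = {}   # service -> (active connections, sub-slot)

        def assign(self, service, sub_slot):
            """Place a new service in slot 0 (no active connections)."""
            self.slots[0][sub_slot].add(service)
            self.position[service] = (0, sub_slot)

        def on_connection_allocated(self, service):
            """Move the service to the corresponding sub-slot of the next slot."""
            conns, sub_slot = self.position[service]
            new_conns = min(conns + 1, MAX_CONNECTIONS - 1)  # clamp at last slot
            self.slots[conns][sub_slot].discard(service)
            self.slots[new_conns][sub_slot].add(service)
            self.position[service] = (new_conns, sub_slot)

Under this layout, a least-connection pick would amount to scanning the slots from index 0 upward and taking a service from the first non-empty sub-slot.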