Abstract:
Disclosed are, inter alia, methods, apparatus, computer-storage media, mechanisms, and means associated with load balancing across multiple network address translation (NAT) instances and/or processors. N NAT processors and/or instances are each assigned a portion of the source address traffic in order to load balance the network address translation among them. Additionally, the address space of translated addresses is partitioned and uniquely assigned to the NAT processors and/or instances, such that the NAT processor and/or instance assigned to a received translated address can be readily identified therefrom and then used to network address translate that received packet.
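A minimal sketch in Python of the partitioning idea described above, assuming a single contiguous pool of translated addresses split evenly among the NAT instances; the pool, instance count, and function names are illustrative assumptions, not details from the abstract:

import ipaddress

# Hypothetical translated-address pool split evenly across N NAT instances.
POOL_NETWORK = ipaddress.ip_network("203.0.113.0/24")
N_INSTANCES = 4
ADDRESSES_PER_INSTANCE = POOL_NETWORK.num_addresses // N_INSTANCES

def assign_source(src_ip: str) -> int:
    # Each NAT instance handles a portion of the source-address traffic (hash split here).
    return hash(ipaddress.ip_address(src_ip)) % N_INSTANCES

def instance_for_translated(translated_ip: str) -> int:
    # Because each instance owns a unique, contiguous slice of the translated pool,
    # the owning instance can be read directly off the translated address.
    offset = int(ipaddress.ip_address(translated_ip)) - int(POOL_NETWORK.network_address)
    return min(offset // ADDRESSES_PER_INSTANCE, N_INSTANCES - 1)

# A returning packet addressed to 203.0.113.200 goes back to the instance owning that slice.
print(instance_for_translated("203.0.113.200"))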
Abstract:
Disclosed is a system for servers to redirect client requests to other servers in order to distribute client traffic among the servers. A client is assigned to a server, although the client may be unaware of that assignment. When the client accesses a server (possibly one identified to the client by a name service), that server checks the client's assignment. If the client is not assigned to this server, then in some scenarios this server redirects the client to its assigned server, and the client responds by sending its request to the assigned server. In other scenarios, the first server accessed by the client proxies the client's traffic to the assigned server. A database of client-to-server assignments is kept. If the present load distribution is less than ideal (e.g., clients are assigned to an unavailable server), the assignment database is updated to reflect how the load should be distributed.
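One way the assignment check and the redirect/proxy decision could look, sketched in Python; the client identifiers, server names, proxy_mode flag, and rebalancing rule are assumptions made only for illustration:

# Illustrative client-to-server assignment database.
ASSIGNMENTS = {"client-a": "server-1", "client-b": "server-2"}
AVAILABLE_SERVERS = {"server-1", "server-2"}

def handle_request(client_id, this_server, proxy_mode=False):
    assigned = ASSIGNMENTS.get(client_id, this_server)
    if assigned == this_server:
        return ("serve", this_server)          # client reached its assigned server
    if proxy_mode:
        return ("proxy", assigned)             # this server relays traffic to the assigned one
    return ("redirect", assigned)              # tell the client to resend to its assigned server

def rebalance():
    # If the distribution is less than ideal (e.g. a server is unavailable),
    # update the assignment database to reflect how load should be distributed.
    for client, server in list(ASSIGNMENTS.items()):
        if server not in AVAILABLE_SERVERS:
            ASSIGNMENTS[client] = sorted(AVAILABLE_SERVERS)[0]

print(handle_request("client-b", "server-1"))   # -> ('redirect', 'server-2')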
Abstract:
A method and system for forwarding messages received at a traffic manager. The traffic manager receives a message over a first connection to a client computer, and at least a part of the message is to be forwarded to a server. If a connection to the server exists that matches the first connection, at least a part of the message is forwarded over that existing connection. Otherwise, a source address with which to communicate with the server is selected, and a new connection that includes the source address and a destination address associated with the server is opened. In addition, information associating the source address and the destination address with the first connection is stored; this information may then be used to map a response received from the server to the first connection.
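A rough Python sketch of the connection-matching and source-address bookkeeping; the round-robin source-address selection and the two mapping tables are assumptions, not details from the abstract:

import itertools

SOURCE_ADDRESSES = itertools.cycle(["10.0.0.1", "10.0.0.2"])
server_connections = {}   # (source_addr, server_addr) -> connection handle
response_map = {}         # (source_addr, server_addr) -> originating client connection

def forward(client_conn, server_addr, message):
    # Reuse an existing connection to the server that matches the first connection.
    for (src, dst), conn in server_connections.items():
        if dst == server_addr and response_map.get((src, dst)) == client_conn:
            return conn, message
    # Otherwise select a source address and open a new connection to the server.
    src = next(SOURCE_ADDRESSES)
    conn = ("tcp", src, server_addr)                     # stand-in for an opened socket
    server_connections[(src, server_addr)] = conn
    response_map[(src, server_addr)] = client_conn       # used to map the server's response back
    return conn, message

print(forward("client-conn-1", "192.0.2.80", b"GET /"))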
Abstract:
The invention concerns a network data storage system comprising a storage unit, at least one network client, and an intermediate network switch. The storage unit contains at least two data storage servers, each of which comprises a local storage component containing digital file segments of at least one digital file and is adapted to execute a local digital file management method organising the physical location of the digital file segments. Each data storage server is adapted to communicate with the other data storage servers and to execute a distributed digital file management method. The distributed digital file management method maintains a record of operations and communicates internally with the other data storage servers to obtain information concerning the digital file segments held on those servers, as well as an overview of all information concerning all digital files stored on the storage unit.
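A simplified Python sketch of how the local and distributed file management methods might cooperate; the class shape and method names are assumptions made for the example:

class DataStorageServer:
    def __init__(self, name):
        self.name = name
        self.local_segments = {}   # file_id -> set of segment numbers held locally
        self.op_log = []           # record of operations kept by the distributed method
        self.peers = []

    def store_segment(self, file_id, segment_no):
        # Local digital file management: organise the physical location of segments.
        self.local_segments.setdefault(file_id, set()).add(segment_no)
        self.op_log.append(("store", file_id, segment_no))

    def overview(self):
        # Distributed digital file management: ask the other servers which segments
        # they hold and build an overview of every file on the storage unit.
        view = {}
        for server in [self] + self.peers:
            for file_id, segments in server.local_segments.items():
                view.setdefault(file_id, {}).update({s: server.name for s in segments})
        return view

a, b = DataStorageServer("srv-a"), DataStorageServer("srv-b")
a.peers, b.peers = [b], [a]
a.store_segment("file-1", 0)
b.store_segment("file-1", 1)
print(a.overview())   # {'file-1': {0: 'srv-a', 1: 'srv-b'}}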
Abstract:
A method of load distribution for a cluster of two or more nodes. The method comprises receiving an initial request packet on a network device having a virtual IP address; forwarding the request packet from the network device to a cluster of at least two nodes, wherein each of the at least two nodes has an internal dispatcher module and a unique, non-conflicting virtual IP address; establishing one of the at least two nodes as a priority dispatcher or dispatcher endpoint, wherein if any one node fails, the virtual IP address of the node that is no longer active falls back to another node within the cluster based on cluster priorities; dispatching the request packet to one of the nodes associated with the cluster; and forwarding the request from that node to a switching device.
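A small Python sketch of the per-node virtual IPs and the priority-based fallback; the node names, addresses, and numeric priorities are illustrative assumptions:

nodes = {
    "node-1": {"vip": "192.0.2.11", "priority": 1, "active": True},
    "node-2": {"vip": "192.0.2.12", "priority": 2, "active": True},
}

def owner_of_vip(vip):
    # The VIP is served by its original node while that node is active; if the node
    # fails, the VIP falls back to the highest-priority active node in the cluster.
    original = next(n for n, cfg in nodes.items() if cfg["vip"] == vip)
    if nodes[original]["active"]:
        return original
    active = [n for n, cfg in nodes.items() if cfg["active"]]
    return min(active, key=lambda n: nodes[n]["priority"])

nodes["node-1"]["active"] = False
print(owner_of_vip("192.0.2.11"))   # -> 'node-2' after node-1 fails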
Abstract:
In one embodiment, a method can include: (i) classifying a packet in a server load balancer (SLB) for determining if the packet is destined for a virtual Internet protocol (VIP) address hosted on the SLB; (ii) selecting a server from a group of servers representing the VIP address; (iii) changing a destination IP address of the packet from the VIP address to a real IP address of the selected server; and (iv) recirculating the packet for repeating the classifying.
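The four steps could be sketched in Python roughly as follows; the VIP, the real-server pool, and the hash-based selection are assumptions for illustration:

VIP = "198.51.100.10"
REAL_SERVERS = ["10.1.1.1", "10.1.1.2", "10.1.1.3"]

def slb_process(packet):
    # (i) classify: is the packet destined for a VIP hosted on this SLB?
    if packet["dst_ip"] != VIP:
        return packet                                   # not for the VIP: forward as-is
    # (ii) select a server from the group representing the VIP.
    server = REAL_SERVERS[hash(packet["src_ip"]) % len(REAL_SERVERS)]
    # (iii) change the destination IP from the VIP to the selected server's real IP.
    rewritten = dict(packet, dst_ip=server)
    # (iv) recirculate the rewritten packet so classification runs again.
    return slb_process(rewritten)

print(slb_process({"src_ip": "192.0.2.7", "dst_ip": VIP}))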
Abstract:
The present invention relates to a method of performing register functions in a Session Initiation Protocol (SIP) load balancer and to an SIP load balancer performing the method, and more particularly, to a method and apparatus by which the SIP load balancer registers information of a user agent with at least one real SIP server. According to the present invention, the number of messages transmitted and received by a user agent is decreased, thereby notably reducing the amount of information processed by the user agent.
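A toy Python sketch of the fan-out that lets the user agent exchange only one request/response pair while the load balancer registers it with each real SIP server; the server list and message shapes are assumptions:

REAL_SIP_SERVERS = ["sip1.example.com", "sip2.example.com"]

def handle_register(address_of_record, contact):
    # The load balancer, not the user agent, sends a REGISTER to each real SIP
    # server, so the user agent exchanges only a single request/response pair.
    fanned_out = [{"method": "REGISTER", "server": server,
                   "aor": address_of_record, "contact": contact}
                  for server in REAL_SIP_SERVERS]
    return {"to_user_agent": "200 OK", "sent_upstream": fanned_out}

print(handle_register("sip:alice@example.com", "sip:alice@192.0.2.33"))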
Abstract:
To solve the problems that the load on a VPN device becomes large when the number of terminal devices increases in encrypted communication using a VPN technique, and that only the communication between the terminal device and the VPN device is encrypted, preventing end-to-end encrypted communication, a communication system is provided that includes: a terminal device; a plurality of blades; and a management server that manages the blades. In this system, the management server selects a blade, authenticates the terminal device and the selected blade, and mediates establishment of an encrypted communication path between the terminal device and the selected blade; the terminal device and the blade then perform encrypted communication without the mediation of the management server; and the management server requests a validation server to authenticate each terminal.
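A high-level Python sketch of the mediation flow; the least-loaded blade selection and the names used below are assumptions made only for this example:

def mediate_session(terminal, blades, validate):
    # Management server: have the validation server authenticate the terminal,
    # select a blade, and mediate establishment of the encrypted path. After this
    # step the terminal and blade communicate end to end without the management server.
    if not validate(terminal):
        raise PermissionError("terminal failed validation")
    blade = min(blades, key=lambda b: b["load"])   # e.g. pick the least-loaded blade
    return {"terminal": terminal, "blade": blade["name"], "path": "encrypted, direct"}

blades = [{"name": "blade-1", "load": 3}, {"name": "blade-2", "load": 1}]
print(mediate_session("terminal-A", blades, lambda t: True))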
Abstract:
To achieve encryption load balancing, a dispatcher, in communication with one or more engines, delegates one or more requests to the one or more engines. The engines execute cryptographic operations on data. The dispatcher may implement one or more load balancing algorithms to delegate requests to engines in accordance with data protection classes and rules for improved efficiency, performance, and security. To achieve distributed policy enforcement, the engines may also analyze whether the request violates an item access rule.
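One possible shape of such a dispatcher, sketched in Python; the protection classes, the item access rule, and the per-class round-robin policy are assumptions for illustration:

import itertools

ENGINES = {"high": itertools.cycle(["hsm-1", "hsm-2"]),
           "standard": itertools.cycle(["sw-engine-1", "sw-engine-2"])}

def violates_item_access_rule(request):
    # Distributed policy enforcement: a request touching a restricted item is refused.
    return request.get("item") in {"restricted-key"}

def dispatch(request):
    # Delegate the cryptographic operation to an engine according to the
    # request's data protection class.
    if violates_item_access_rule(request):
        return {"status": "denied"}
    engine = next(ENGINES[request.get("class", "standard")])
    return {"status": "delegated", "engine": engine, "operation": request["op"]}

print(dispatch({"op": "encrypt", "class": "high", "item": "payroll.db"}))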
Abstract:
A transparent load balancer receives incoming Ethernet frames having incoming source and destination IP and MAC addresses. The load balancer diverts the incoming frames to one of several multi-application platforms. The incoming frames are communicated across a first TCP connection that terminates on a multi-application platform. The first TCP connection is defined by TCP source and destination ports. The transparent load balancer receives outgoing frames from the multi-application platform and outputs the outgoing frames with source and destination IP and MAC addresses that are identical to the incoming source and destination IP and MAC addresses. The outgoing frames are communicated across a second TCP connection, the second TCP connection being defined by the same TCP source port and TCP destination port of the first TCP connection. The transparent load balancer and multi-application platforms can be inserted into a running network without noticeable interruption to devices on the network.
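A Python sketch of the address-preserving behaviour; the frame representation and the hash-based platform choice are simplified assumptions:

PLATFORMS = ["platform-1", "platform-2", "platform-3"]

def divert(frame):
    # Divert each incoming frame to one of the multi-application platforms (per flow).
    flow = (frame["src_ip"], frame["dst_ip"], frame["src_port"], frame["dst_port"])
    return PLATFORMS[hash(flow) % len(PLATFORMS)]

def emit_outgoing(incoming, payload):
    # Outgoing frames keep the incoming IP/MAC addresses and the same TCP ports,
    # so devices on the network see no change when the balancer is inserted.
    return {
        "src_mac": incoming["src_mac"], "dst_mac": incoming["dst_mac"],
        "src_ip": incoming["src_ip"], "dst_ip": incoming["dst_ip"],
        "src_port": incoming["src_port"], "dst_port": incoming["dst_port"],
        "payload": payload,
    }

frame = {"src_mac": "aa:aa", "dst_mac": "bb:bb", "src_ip": "192.0.2.1",
         "dst_ip": "198.51.100.2", "src_port": 40000, "dst_port": 80}
print(divert(frame), emit_outgoing(frame, b"response")["dst_ip"])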