Abstract:
A network node computes a fair share data rate for sharing a communication channel in a local area network. The network node determines the information required for computing the fair share by snooping the network, by receiving the required information from other network nodes, or by a combination of the two techniques. Alternatively, instead of computing the fair share data rate, the network node may receive the fair share data rate computed by another network node. The fair share data rate is enforced by the network node in a network protocol stack layer above the media access control layer. In one embodiment, the network protocol stack layer above the media access control layer is the link layer.
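The abstract does not specify an enforcement mechanism, but the idea lends itself to a token-bucket shaper sitting just above the MAC layer. The sketch below is a minimal illustration under assumed names (FairShareShaper, update_station_count, tx); it simply takes the fair share to be channel capacity divided by the number of active stations learned by snooping or from peer reports, and is not the patented implementation.

```python
import time
from collections import deque

class FairShareShaper:
    """Hypothetical link-layer shaper: enforces a fair share computed as
    channel capacity divided by the number of active stations, where the
    station count is learned by snooping or reported by other nodes."""

    def __init__(self, channel_capacity_bps):
        self.channel_capacity_bps = channel_capacity_bps
        self.active_stations = 1              # updated by snooping / peer reports
        self.tokens = 0.0                     # token bucket, in bits
        self.last_refill = time.monotonic()
        self.queue = deque()                  # frames held above the MAC layer

    def update_station_count(self, n):
        # Called whenever snooping (or a report from another node) reveals
        # that n stations are sharing the channel.
        self.active_stations = max(1, n)

    def fair_share_bps(self):
        return self.channel_capacity_bps / self.active_stations

    def send(self, frame, tx):
        """Queue the frame and release it to the MAC layer (via tx) only as
        tokens allow, so the node never exceeds its fair share."""
        self.queue.append(frame)
        now = time.monotonic()
        self.tokens += (now - self.last_refill) * self.fair_share_bps()
        self.last_refill = now
        while self.queue and self.tokens >= len(self.queue[0]) * 8:
            f = self.queue.popleft()
            self.tokens -= len(f) * 8
            tx(f)                             # hand the frame down to the MAC layer
```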
Abstract:
A distributed network management function is implemented in a computer network using a set of active nodes. Each of the active nodes comprises a router and a logically-separate active engine. The router in a given one of the active nodes diverts active packets associated with the network management function to the corresponding active engine for processing. The active engine supports one or more sessions, based at least in part on the active packets, for implementing at least a portion of the network management function. Each of the sessions supported by the active engine corresponds to a particular distributed task to be performed in the network, and has associated therewith a unique network identifier, such that different programs on different network nodes can belong to the same session. The router and active engine at a given one of the nodes may reside on the same machine, or on physically-separate machines.
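As a rough illustration of the router/active-engine split and of session demultiplexing by a network-unique identifier, the following sketch uses hypothetical names (ACTIVE_PROTO, ActiveEngine, Router.receive); it is only a sketch of the described architecture, not the patented implementation.

```python
from dataclasses import dataclass, field

ACTIVE_PROTO = 0x88          # hypothetical marker identifying an active packet

@dataclass
class ActivePacket:
    proto: int
    session_id: str          # network-unique identifier shared across nodes
    payload: bytes

@dataclass
class Session:
    session_id: str
    state: dict = field(default_factory=dict)

    def handle(self, packet):
        # Per-task processing for one distributed network management task.
        self.state["packets_seen"] = self.state.get("packets_seen", 0) + 1

class ActiveEngine:
    """One Session per distributed task; packets carrying the same session_id
    on different nodes belong to the same logical session."""
    def __init__(self):
        self.sessions = {}

    def dispatch(self, packet):
        session = self.sessions.setdefault(packet.session_id,
                                           Session(packet.session_id))
        session.handle(packet)

class Router:
    """Forwards ordinary packets normally and diverts active packets to the
    logically separate active engine (possibly on another machine)."""
    def __init__(self, engine, forward):
        self.engine = engine
        self.forward = forward   # callable that forwards a non-active packet

    def receive(self, packet):
        if getattr(packet, "proto", None) == ACTIVE_PROTO:
            self.engine.dispatch(packet)
        else:
            self.forward(packet)
```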
Abstract:
A method and apparatus for determining locations for and placing k caches in a network so as to optimize a network cost parameter. The method includes the steps of selecting a placement parameter l that is greater than 0, assigning l caches to l arbitrary nodes in the network, selecting l caches to remove from the network, assigning l+1 caches to every possible location in the network, computing and recording network performance data on the network cost parameter for each location and for each selection of l caches, determining a location where the computed and recorded network performance data on the network cost parameter is optimized, assigning l+1 caches to the determined location, and repeating the above steps of selecting l caches, assigning l+1 caches, computing and recording network performance, determining a location, and assigning l+1 caches for k−1 iterations.
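Read procedurally, this is a greedy placement with l-step backtracking: at each iteration, any l already-placed caches may be removed and l+1 caches placed wherever the network cost parameter comes out best. The sketch below assumes a user-supplied cost(placement) function and stops once k caches are placed; it illustrates the described loop rather than reproducing the patented method.

```python
from itertools import combinations

def place_caches(nodes, k, l, cost):
    """Greedy cache placement with l-step backtracking (illustrative sketch).
    cost(placement) is assumed to evaluate the network cost parameter for a
    candidate set of cache locations, e.g. total or average access cost."""
    assert 0 < l <= k <= len(nodes)
    placement = set(list(nodes)[:l])          # l caches at arbitrary nodes to start
    while len(placement) < k:
        best_cost, best_placement = float("inf"), placement
        # Consider every way of removing l of the currently placed caches ...
        for removed in combinations(placement, min(l, len(placement))):
            base = placement - set(removed)
            free = [n for n in nodes if n not in base]
            # ... and every way of assigning l+1 caches to the remaining locations.
            for added in combinations(free, min(l + 1, len(free))):
                candidate = base | set(added)
                c = cost(candidate)
                if c < best_cost:
                    best_cost, best_placement = c, candidate
        placement = best_placement
    return placement
```

Larger values of l explore more of the placement space per iteration at a correspondingly higher computational cost.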
Abstract:
Coordinated SYN denial of service (CSDoS) attacks are reduced or eliminated by a process that instructs a layer 4-7 switch to divert a small fraction of SYN packets destined to a server S to a web guard processor. The web guard processor acts as a termination point in the connection with the one or more clients from which the packets originated, and upon the establishment of a first TCP connection with a legitimate client, opens a new TCP connection to the server and transfers the data between these two connections. It also monitors the number of timed-out connections to each client. When a CSDoS attack is in progress, the number of forged attack packets, and hence the number of timed-out connections, increases significantly. If this number exceeds a predetermined threshold amount, the web guard processor declares that this server is under attack. It then reprograms the switch to divert all traffic (i.e., SYN packets) destined to this server to the web guard processor, or to delete all SYN packets to the server in question. If the number of timed-out connections increases, it can also inform other web guard processor arrangements, and/or try to find the real originating hosts for the forged packets. In either event, the server is thus shielded from, and does not feel the effects of, the DoS attack. Alternatively, a simpler approach is to arrange layer 4-7 switches to forward SYN packets to respective “null-cache” TCP proxies, each of which is arranged to operate without an associated cache and is therefore inexpensive to install and operate. These null-cache TCP proxies, when subject to a CSDoS attack, will not successfully establish a TCP connection with a malicious host, due to the nature of the attack itself. Accordingly, no connections will be made from the null-cache TCP proxies to the server under attack, and the server will be protected.
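The attack-detection logic amounts to counting handshakes that never complete and reprogramming the switch once a threshold is crossed. The sketch below illustrates that decision flow; the Switch methods (divert_fraction, divert_all, drop_syn) are assumed placeholders for whatever management interface the actual layer 4-7 switch exposes, and the TCP termination and relaying are reduced to stubs.

```python
class Switch:
    """Stand-in for a programmable layer 4-7 switch (hypothetical interface)."""
    def divert_fraction(self, server, fraction, to): ...
    def divert_all(self, server, to): ...
    def drop_syn(self, server): ...

class WebGuard:
    """Decision logic of the web guard processor, reduced to counters.
    A real guard would terminate the client's TCP handshake and relay data
    between the client connection and a fresh connection to the server."""

    def __init__(self, switch, server, timeout_threshold, sample_fraction=0.05):
        self.switch = switch
        self.server = server
        self.timeout_threshold = timeout_threshold
        self.timed_out = 0
        self.under_attack = False
        # Normal operation: only a small fraction of SYNs is diverted here.
        switch.divert_fraction(server, sample_fraction, to=self)

    def on_handshake_complete(self, client_conn, connect_to_server):
        # A legitimate client finished the 3-way handshake: open a new TCP
        # connection to the server and relay between the two connections.
        server_conn = connect_to_server(self.server)
        return client_conn, server_conn       # relaying omitted in this sketch

    def on_handshake_timeout(self):
        # Forged SYNs never complete, so each one eventually times out here.
        self.timed_out += 1
        if not self.under_attack and self.timed_out > self.timeout_threshold:
            self.declare_attack()

    def declare_attack(self):
        self.under_attack = True
        # Shield the server: divert all SYN traffic for it to the guard ...
        self.switch.divert_all(self.server, to=self)
        # ... or, alternatively, drop all SYNs addressed to it:
        # self.switch.drop_syn(self.server)
```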
Abstract:
Coordinated SYN denial of service (CSDoS) attacks are managed by a process that diverts a fraction of SYN packets destined to a server S to a web guard processor. The web guard processor acts as a termination point in the connection with the one or more clients from which the packets originated, and upon the establishment of a first TCP connection with a legitimate client, opens a new TCP connection to the server and transfers the data between these two connections. It also monitors the number of timed-out connections. When an attack is in progress, the number of forged attack packets, and hence of timed-out connections, increases significantly. If this number exceeds a predetermined threshold amount, the web guard processor declares that this server is under attack. The switch then diverts all traffic (i.e., SYN packets) destined to this server to the web guard processor, or deletes all SYN packets destined to the server.
Abstract:
Coordinated SYN denial of service (CSDoS) attacks are reduced or eliminated by a process that instructs a switch to divert SYN packets destined to a server to a TCP proxy which, when subject to a CSDoS attack, will not successfully establish a TCP connection with a host. CSDoS attacks are reduced or eliminated by a process that includes forwarding a sampling of packets destined to a server to a processor and, when packets in the sampling indicate an attack, arranging the switch to divert all packets destined to the server to the processor. CSDoS attacks are reduced or eliminated in a system including a switch, a server, and a processor, where the processor is adapted to control the network switch to divert all SYN packets destined to the server to the processor based on monitoring a number of timed-out connections between the processor and one or more clients.
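The “null-cache” proxy alternative works passively: because the proxy terminates TCP itself, spoofed SYNs from an attack never complete the handshake, and no corresponding connection is ever opened to the protected server. The sketch below is a minimal illustration of such a proxy, assuming the switch has already been arranged to forward the server's SYN traffic to listen_addr; it is not the patented arrangement.

```python
import socket
import threading

def relay(src, dst):
    # Copy bytes one way until the sending side closes its half of the stream.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def null_cache_proxy(listen_addr, server_addr):
    """Cache-less TCP proxy: completes client handshakes and relays bytes.
    Under a CSDoS attack the forged SYNs never finish the handshake, so
    accept() never returns for them and no connection to the server is made."""
    ls = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    ls.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    ls.bind(listen_addr)
    ls.listen(128)
    while True:
        client, _ = ls.accept()                 # only legitimate clients get here
        upstream = socket.create_connection(server_addr)
        threading.Thread(target=relay, args=(client, upstream), daemon=True).start()
        threading.Thread(target=relay, args=(upstream, client), daemon=True).start()
```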
Abstract:
A packet data filter which stores ordered rules and sequentially applies the rules to received data packets to determine the disposition of each data packet. The packet filter maintains a match count in memory which indicates the number of times each rule matched an incoming data packet. Periodically, at the initiation of a user, or based on operating parameters of the filter, the rules are automatically re-ordered based on the match count. As a result of the re-ordering, rules with higher match counts are moved earlier in the sequential evaluation order and rules with lower match counts are moved later in the sequential evaluation order. As such, rules which are more likely to match incoming data packets are evaluated earlier, thus avoiding the evaluation of later rules. In order to prevent a re-ordering which would change the overall security policy of the packet filter, pairs of rules are compared to determine if they conflict (i.e., the swapping of the two rules would result in a change in the overall security policy). During re-ordering, the swapping of conflicting rules is prevented.
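One way to realize the described re-ordering is a bounded bubble pass that moves frequently matched rules forward but never lets a rule move past one it conflicts with, so the first-match policy is preserved. The sketch below assumes a simple Rule structure and a deliberately conservative conflicts() test; a real filter would compare the rules' match fields for overlap.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    predicate: Callable          # packet -> bool: does this rule match the packet?
    action: str                  # disposition, e.g. "accept" or "drop"
    match_count: int = 0

def conflicts(a, b):
    """Two rules conflict if swapping them could change the security policy.
    A real filter would test whether the rules' match fields overlap; this
    deliberately conservative placeholder treats any pair with different
    actions as conflicting."""
    return a.action != b.action

def reorder(rules):
    """Bubble rules with higher match counts toward the front of the list,
    but never move a rule past one it conflicts with, so the overall
    first-match policy is preserved."""
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(rules) - 1):
            a, b = rules[i], rules[i + 1]
            if b.match_count > a.match_count and not conflicts(a, b):
                rules[i], rules[i + 1] = b, a
                swapped = True
    return rules

def filter_packet(rules, packet):
    """Sequentially evaluate the rules; the first match decides the packet's
    disposition, and its match count is updated for the next re-ordering."""
    for rule in rules:
        if rule.predicate(packet):
            rule.match_count += 1
            return rule.action
    return "drop"                # default disposition when no rule matches
```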
Abstract:
A joint coupling for axially connecting together a first joint and a second joint, which includes first and second joints each having a complementary member sized and shaped so that, when the joints are properly aligned, the complementary members form an overlap structure which is effectively a continuation of the first joint or the second joint, and a sleeve which is slidable over the overlap structure to secure the joint coupling. The inside surfaces of the sleeve are tapered, as are the outside surfaces of the overlap structure such that when the sleeve is slid completely over the overlap structure, the inside surfaces of the sleeve make full contact with the outside surfaces of the overlap structure to firmly secure the joint coupling. Two or more joints may be joined at any desired relative orientation by use of a hub which has complementary elements which can accommodate the complementary members of the joints. The joint couplings can be readily used to create complicated structures of virtually any desired shape.
Abstract:
User information describing a group of users of a distributed computer system configured to store and retrieve individualized user data associated with individual ones of the group of users, and system resource information associated with the distributed computer system, may be obtained. A global distribution plan describing a distribution of at least a portion of the individualized user data associated with the group may be determined based on a global optimization function of the obtained user information and system resource information associated with the distributed computer system, wherein the global optimization function is based on optimizing a global distribution of the portion of the individualized user data based on a determination of a measure of performance and fault tolerance associated with a model of the distributed computer system configured in accordance with the global distribution plan. The determined global distribution plan may be provided to a device for processing.
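As a loose illustration only: the sketch below evaluates candidate distribution plans against a simple model of the system and picks the best one. The Plan structure, the load and replica-count metrics, and the choose_plan helper are assumptions made for the example, not the patented global optimization function.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    # user -> list of machines that will hold that user's individualized data
    assignment: dict

def score(plan, user_info, machines):
    """Toy stand-in for a global optimization function: model the system
    under this plan and combine a fault-tolerance measure (fewest replicas
    any user has) with a performance measure (negated peak machine load)."""
    load = {m: 0 for m in machines}
    for user, replicas in plan.assignment.items():
        for m in replicas:
            load[m] += user_info[user]["data_size"]
    fault_tolerance = min(len(r) for r in plan.assignment.values())
    peak_load = max(load.values()) if load else 0
    return (fault_tolerance, -peak_load)      # compared lexicographically

def choose_plan(candidate_plans, user_info, machines):
    # Pick the candidate global distribution plan whose modeled score is best;
    # the chosen plan would then be provided to a device for processing.
    return max(candidate_plans, key=lambda p: score(p, user_info, machines))
```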
Abstract:
A technique for managing network elements significantly reduces the amount of monitoring related traffic by using a combination of aperiodic polling and asynchronous event reporting. A global resource (e.g., a network of interconnected nodes or resources) is partitioned into a plurality of separate nodes, giving a fixed resource budget to each of the nodes. When any of the nodes exceeds its budget, based upon local monitoring at that node, the node triggers a report, typically sending a message to a central manager. In response, the central manager then and only then issues a global poll of all (or substantially all) of the nodes in the network. A rate based technique can also be used to monitor resource usage at the nodes, and send a message to a central monitoring location only when the rate at which the value of a local variable changes is too high.
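A minimal sketch of the budget-triggered scheme, under assumed names (Node, CentralManager, report_violation): each node monitors its own usage against a fixed budget and reports asynchronously only when that budget is exceeded, and only then does the manager poll all nodes. What the manager does with the resulting snapshot (for example, rebalancing the per-node budgets) is left as a comment; the rate-based variant mentioned above would instead track how fast the local variable changes and report when that rate is too high.

```python
class CentralManager:
    """Issues a global poll only when some node reports that its fixed
    resource budget has been exceeded; no periodic polling traffic."""
    def __init__(self):
        self.nodes = []

    def register(self, node):
        self.nodes.append(node)

    def report_violation(self, node_name):
        # One node exceeded its budget: poll (substantially) all nodes once.
        snapshot = {n.name: n.poll() for n in self.nodes}
        # What happens next (e.g., redistributing the global budget based on
        # the snapshot) is application-specific and omitted here.
        return snapshot

class Node:
    """Local monitoring of resource usage against a fixed budget."""
    def __init__(self, name, budget, manager):
        self.name = name
        self.budget = budget
        self.usage = 0.0
        self.manager = manager
        manager.register(self)

    def record_usage(self, amount):
        self.usage += amount
        # Asynchronous event report, sent only when the local budget is exceeded.
        if self.usage > self.budget:
            self.manager.report_violation(self.name)

    def poll(self):
        # Answer an explicit poll from the central manager.
        return self.usage
```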