Abstract:
Systems and methods for automatically controlling efficient operation of a plurality of network appliances that are operatively linked and networked, balancing network traffic load across the appliances that are selectively enabled. The system facilitating performance of the method includes at least a plurality of network appliances operatively connected to a switch and controlled by a network access control module. During system operation, at any given moment in time, each of the plurality of network appliances operates in one of two modes: fully operational or stand-by. Which network appliances of the plurality are fully operational, and thereby consuming full operational power, depends upon the network traffic load at any given moment in time. Network appliances functioning in stand-by mode consume low power levels that are sufficient to allow a stand-by appliance to receive a command signal directing it to switch from stand-by to fully operational mode.
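A minimal sketch of how such load-based power control might work, assuming a hypothetical per-appliance capacity figure and wake()/sleep() command interface that the abstract does not specify:

```python
# Hypothetical sketch of a traffic-aware power controller for a pool of
# network appliances; the capacity figure and method names are illustrative.
import math
from dataclasses import dataclass
from typing import List

APPLIANCE_CAPACITY_MBPS = 1000  # assumed throughput one appliance can handle

@dataclass
class Appliance:
    name: str
    active: bool = False  # False = stand-by (low power), True = fully operational

    def wake(self) -> None:
        # A stand-by appliance retains enough power to receive this command.
        self.active = True

    def sleep(self) -> None:
        self.active = False

def rebalance(appliances: List[Appliance], load_mbps: float) -> None:
    """Keep just enough appliances fully operational for the current load."""
    needed = max(1, math.ceil(load_mbps / APPLIANCE_CAPACITY_MBPS))
    active = [a for a in appliances if a.active]
    standby = [a for a in appliances if not a.active]
    if len(active) < needed:
        for a in standby[: needed - len(active)]:
            a.wake()
    else:
        for a in active[needed:]:
            a.sleep()
```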
Abstract:
Layer 7 switching may be accomplished using one or more caches placed throughout a computer network. Changes to a file on a server may be detected and propagated throughout the network. At the switch or router level, once notification of a change to a file is received, the content may be retrieved from the server and placed in a connected cache. A routing table entry may be created for the content and also placed in the cache. The routing table entry may contain an original location field identifying the original location of the content, a distance field indicating a distance from the cache to the server, and a field indicating a version number of the content. Additional fields may also be contained within the routing table entry. When a user requests a specific file, rather than forwarding the request directly to the server containing the original file, the request may be handled by the router closest to the user that has a connected cache containing the content. This allows a user's request to be handled much more quickly and efficiently than in prior art solutions.
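An illustrative shape for the described routing table entry; the original-location, distance, and version fields come from the abstract, while the remaining names (content_key, is_stale) are assumptions:

```python
# Illustrative routing table entry for cached Layer 7 content.
from dataclasses import dataclass

@dataclass
class CachedContentRoute:
    content_key: str        # e.g. the requested URL or file path (assumed)
    original_location: str  # address of the origin server holding the file
    distance: int           # distance metric from this cache to the origin server
    version: int            # version number of the cached content

def is_stale(entry: CachedContentRoute, origin_version: int) -> bool:
    """A change notification carrying a newer version invalidates the cached copy."""
    return origin_version > entry.version
```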
Abstract:
In order to direct content requests to an appropriate content serving site in a computer network, a phased learning approach is utilized to ensure that, as best as possible, each request is made to the content serving site with the shortest delay. In a setup phase, an indirect path return geographic server load balancer times the sending of transit time requests to all of the individual content serving sites so that the transit requests all arrive at the content serving sites at the same time. Therefore, when the requesting fixed location receives communications from the content serving sites, it can easily tell which content serving site has the least delay by an established race condition. The winner of the race may then be relayed to the indirect path return geographic server load balancer for later usage. In an execution mode, only the m fastest content serving sites and n other sites (used to test random and new sites) are sent a transit time request when a DNS request arrives from the requesting fixed location. The particular m fastest content serving sites and n other sites may be dynamically updated so as to ensure the most reliable directing of requests. This solution provides a very efficient and effective means by which to determine the closest content serving site while keeping load balancer-created traffic at a minimum.
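A rough sketch of the execution-mode selection and dynamic update step, assuming hypothetical names (pick_probe_set, record_winner, latency_history) and a simple RTT-keyed ranking that the abstract does not spell out:

```python
# Sketch of execution-mode site selection: probe the m historically fastest
# sites plus n other (random or new) sites, then record whichever answered
# first so the ranking stays current.
import random
from typing import Dict, List

def pick_probe_set(latency_history: Dict[str, float], m: int, n: int) -> List[str]:
    """Return the sites to send transit time requests to for this DNS request."""
    ranked = sorted(latency_history, key=latency_history.get)  # fastest first
    fastest = ranked[:m]
    others = [site for site in latency_history if site not in fastest]
    return fastest + random.sample(others, min(n, len(others)))

def record_winner(latency_history: Dict[str, float], site: str, rtt: float) -> None:
    """Update the ranking with the race winner reported by the fixed location."""
    latency_history[site] = rtt
```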
Abstract:
A method and apparatus of a device that determines a cause and effect of congestion in the device is described. In an exemplary embodiment, the device measures a queue group occupancy of a queue group for a port in the device, where the queue group stores a plurality of packets to be communicated through that port. In addition, the device determines whether the measurement indicates potential congestion of the queue group, where the congestion prevents a packet from being communicated within a time period. If potential congestion exists on that queue group, the device further gathers information regarding packets to be transmitted through that port. For example, the device can gather statistics on packets that are stored in the queue group and/or on newly enqueued packets.
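A minimal sketch of the occupancy check and the statistics-gathering step, assuming a hypothetical congestion threshold and packet fields (src, dst) not given in the abstract:

```python
# Sketch of a per-port congestion check driven by queue group occupancy.
CONGESTION_THRESHOLD = 0.8  # assumed fraction of queue group capacity

def check_port_congestion(occupancy: int, capacity: int) -> bool:
    """Return True when occupancy suggests packets may miss their time budget."""
    return occupancy / capacity >= CONGESTION_THRESHOLD

def gather_packet_stats(queued_packets, new_enqueues):
    """When congestion is flagged, count who is contributing to it."""
    stats = {}
    for pkt in list(queued_packets) + list(new_enqueues):
        key = (pkt.get("src"), pkt.get("dst"))
        stats[key] = stats.get(key, 0) + 1
    return stats
```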
Abstract:
An adjunct network device includes several ports, an uplink interface, and an adjunct forwarding engine coupled to the ports and the uplink interface. A first port is configured to receive a packet, which includes a destination address. The adjunct forwarding engine is configured to send the packet to the uplink interface if the destination address is not associated with any of the ports. The packet is sent to one of the ports if the destination address is associated with that port.
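A one-function sketch of the forwarding decision, assuming a hypothetical address-to-port table; returning None stands in for sending the packet to the uplink interface:

```python
# Sketch of the adjunct forwarding decision: deliver locally if the
# destination maps to one of the adjunct device's ports, otherwise hand the
# packet to the uplink interface.
from typing import Dict, Optional

def forward(packet: dict, addr_to_port: Dict[str, int]) -> Optional[int]:
    """Return the local egress port, or None to indicate the uplink interface."""
    return addr_to_port.get(packet["dst_addr"])  # None => not local => uplink
```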
Abstract:
A method and apparatus of a device that determines a network policy for an attached device based on one or more characteristics of the attached device is described. In one example, a network element detects a device on a port coupled to a link connecting the network element and the device. In response to detecting the device on the port, the network element further determines a device configuration signature from the device, where the device configuration signature is based on a configuration of the device. The network element additionally determines a port-based network policy based on the device configuration signature. The network element then applies the port-based network policy to the port, using the policy to process network data communicated through the port.
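An illustrative sketch of deriving a configuration signature and looking up a port-based policy; the hash over a canonicalized configuration and the names (configuration_signature, POLICY_TABLE, quarantine default) are assumptions, as the abstract does not specify how the signature is computed:

```python
# Hypothetical mapping from a device configuration signature to a
# port-based network policy.
import hashlib

def configuration_signature(device_config: dict) -> str:
    """Derive a stable signature from the attached device's configuration."""
    canonical = ",".join(f"{k}={device_config[k]}" for k in sorted(device_config))
    return hashlib.sha256(canonical.encode()).hexdigest()

POLICY_TABLE = {
    # signature -> policy applied to the port (entries are illustrative)
}

def policy_for_port(device_config: dict, default_policy: str = "quarantine") -> str:
    """Pick the policy to apply to the port the device is attached to."""
    return POLICY_TABLE.get(configuration_signature(device_config), default_policy)
```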
Abstract:
In one embodiment, when a network element is to be removed from or inserted into a network, a Graceful Operations Manager schedules graceful shut-down and/or start-up routines for different protocols and/or components on the network element in an optimal order based on dependencies between the different protocols and components. The Graceful Operations Manager communicates with the different components at different stages of their shut-down or start-up process and communicates information on the standby topology across components and/or protocols to enable the synchronization of the standby topology computation on all components and/or protocols that are affected by the removal or insertion.
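A minimal sketch of ordering shut-down routines by inter-component dependencies using a topological sort; the component names and dependency graph are invented for illustration:

```python
# Order graceful shut-down routines so that each component shuts down only
# after the components that depend on it have finished.
from graphlib import TopologicalSorter

# graph[x] = components whose shut-down must complete before x shuts down
shutdown_deps = {
    "bgp": set(),                      # hypothetical: drain BGP sessions first
    "routing-table": {"bgp"},          # then retire routes
    "forwarding": {"routing-table"},   # finally stop the forwarding component
}

def shutdown_order(deps):
    """Return a shut-down sequence that respects the dependencies."""
    return list(TopologicalSorter(deps).static_order())

print(shutdown_order(shutdown_deps))   # -> ['bgp', 'routing-table', 'forwarding']
```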
Abstract:
A method for cable diagnostics in a network includes performing a test to determine initial state information for each of a plurality of lines coupled to a switch and storing the initial state information in a database. When a change in the state of a line is detected, the test is re-run to determine new state information for the line. The new state information is stored in the database, and a message identifying the change in state and a likely cause of the state change is issued to a network operator.
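A rough sketch of the diagnostic flow (baseline test, re-test on a state change, store, notify); the test routine and cause analysis are hypothetical stand-ins:

```python
# Sketch of the cable diagnostics flow described above.
state_db = {}  # line_id -> last known state information

def baseline(lines, run_test):
    """Run the initial test on every line and record the results."""
    for line_id in lines:
        state_db[line_id] = run_test(line_id)

def on_state_change(line_id, run_test, notify):
    """Re-run the test for a changed line, store the result, notify the operator."""
    old = state_db.get(line_id)
    new = run_test(line_id)
    state_db[line_id] = new
    notify(f"Line {line_id}: state changed from {old} to {new}; "
           f"likely cause: {likely_cause(old, new)}")

def likely_cause(old, new):
    # Placeholder heuristic; a real implementation would inspect the test results.
    return "cable fault or disconnect" if new != old else "unknown"
```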
Abstract:
In order to direct content requests to an appropriate content serving site in a computer network, a phased learning approach is utilized to ensure that, as best as possible, each request is made to the content serving site with the shortest delay. In a setup phase, an indirect path return geographic server load balancer times queries to all of the individual content serving sites so that the queries all arrive at the content serving sites at the same time. Therefore, when the requesting fixed location receives communications from the content serving sites, it can easily tell which content serving site has the least delay by an established race condition. The winner of the race may then be relayed to the indirect path return geographic server load balancer for later usage. In an execution mode, only the m fastest content serving sites and n other sites (used to test random and new sites) are queried when a DNS request arrives from the requesting fixed location. The particular m fastest content serving sites and n other sites may be dynamically updated so as to ensure the most reliable directing of requests. This solution provides a very efficient and effective means by which to determine the closest content serving site while keeping load balancer-created traffic at a minimum.