Abstract:
A method for communicating optically between nodes of an optical network, including forming, between a first node and a second node of the network, a set of lightpaths, each of the set of lightpaths having a respective configuration, and transferring communication traffic between the first and second nodes via the set of lightpaths. The method also includes determining that a communication traffic level associated with the set of lightpaths is less than a predetermined threshold and, in response to the determination, removing a lightpath having a given configuration from the set of lightpaths to form a reduced set of lightpaths. The method further includes transferring the communication traffic between the first and second nodes via the reduced set of lightpaths, while reducing the level of power consumption in the removed lightpath and while maintaining the given configuration of the removed lightpath.
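A minimal sketch of the reduced-power behavior described above, assuming that "configuration" means per-lightpath state such as the assigned wavelength; the class names, the threshold value, and the choice of which lightpath to remove are illustrative assumptions, not details from the abstract:

```python
# Hypothetical sketch of traffic-threshold lightpath reduction; names and
# the threshold value are illustrative assumptions.
from dataclasses import dataclass

LOW_TRAFFIC_THRESHOLD = 0.4  # assumed fraction of aggregate capacity

@dataclass
class Lightpath:
    wavelength_nm: float       # the "configuration" to be preserved
    powered: bool = True

def reduce_lightpath_set(lightpaths, traffic_level):
    """If traffic falls below the threshold, power down one lightpath
    without tearing down its configuration, so it can be restored fast."""
    if traffic_level < LOW_TRAFFIC_THRESHOLD and len(lightpaths) > 1:
        victim = lightpaths[-1]
        victim.powered = False  # power consumption reduced...
        # ...but victim.wavelength_nm is untouched (configuration kept)
        return [lp for lp in lightpaths if lp.powered], victim
    return lightpaths, None
```

The point illustrated is that the powered-down lightpath keeps its configuration, so it can rejoin the set without being re-provisioned.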
Abstract:
The present invention provides improved methods and devices for managing network congestion. Preferred implementations of the invention allow congestion to be pushed from congestion points in the core of a network to reaction points, which may be edge devices, host devices, or components thereof. Preferably, rate limiters at the reaction points shape the individual flows that are causing congestion. Parameters of these rate limiters are preferably tuned based on feedback from congestion points, e.g., in the form of backward congestion notification ('BCN') messages. In some implementations, such BCN messages include congestion change information and at least one instantaneous measure of congestion. The instantaneous measure(s) of congestion may be relative to a threshold of a particular queue and/or relative to a threshold of a buffer that includes a plurality of queues.
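A hedged sketch of how a reaction point might tune a per-flow rate limiter from BCN-style feedback; the two feedback terms (an instantaneous queue offset plus a queue-change term) follow the abstract's description, but the weights, gains, and combining formula are assumptions rather than the patented algorithm:

```python
# Sketch of per-flow rate-limiter tuning from BCN-style feedback.
W = 2.0           # assumed weight on the congestion-change term
GAIN_UP = 0.5     # assumed additive-increase gain (Mbps per unit feedback)
GAIN_DOWN = 0.02  # assumed multiplicative-decrease gain

def on_bcn(rate_mbps, q_offset, q_delta):
    """q_offset: instantaneous queue level relative to its threshold;
    q_delta: change in queue level since the last sample."""
    fb = -(q_offset + W * q_delta)  # fb > 0 means congestion is easing
    if fb > 0:
        return rate_mbps + GAIN_UP * fb                     # additive increase
    return max(1.0, rate_mbps * (1 + GAIN_DOWN * fb))       # multiplicative decrease
```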
Abstract:
Various improvements are provided for prior art policing methods, including token bucket (200) methods and virtual time policing methods. Some preferred methods of the invention involve assigning a non-zero drop probability even when the packet would otherwise have been transmitted according to a prior art policing method. For example, a non-zero drop probability may be assigned even when there are sufficient tokens in a token bucket to allow transmission of the packet. A non-zero drop probability may be assigned, for example, when a token bucket level is at or below a predetermined threshold (160) or according to a rate (205) at which a token bucket is being emptied. Some implementations involve treating a token bucket as a virtual queue wherein the number of free elements in the virtual queue is proportional to the number of remaining tokens in the token bucket. Such implementations may involve predicting a future virtual queue size according to a previous virtual queue size and using this predicted value to calculate a drop probability.
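One way the early-drop idea might look in code, as a sketch rather than the patented method: a token bucket that can drop with non-zero probability even when enough tokens remain, ramping the drop probability linearly as the bucket level falls toward empty below the threshold. The linear ramp and all parameter names are assumptions:

```python
import random
import time

class ProbabilisticTokenBucket:
    """Token bucket that may drop with non-zero probability even when
    sufficient tokens remain; the linear ramp below the threshold is an
    assumed policy for illustration."""
    def __init__(self, rate, burst, threshold):
        self.rate, self.burst, self.threshold = rate, burst, threshold
        self.tokens, self.last = burst, time.monotonic()

    def admit(self, pkt_bytes):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + self.rate * (now - self.last))
        self.last = now
        if self.tokens < pkt_bytes:
            return False                    # classic token-bucket drop
        if self.tokens <= self.threshold:   # early, probabilistic drop
            p_drop = 1.0 - self.tokens / self.threshold
            if random.random() < p_drop:
                return False
        self.tokens -= pkt_bytes
        return True
```

Treating the bucket as a virtual queue, as the abstract describes, amounts to reading the free space of the queue as the remaining tokens, so the same drop-probability calculation applies.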
Abstract:
Methods and apparatus are disclosed for scheduling packets, such as in systems having a non-blocking switching fabric and homogeneous or heterogeneous line card interfaces. In one implementation, multiple request generators, grant arbiters, and acceptance arbiters work in conjunction to determine this scheduling. A set of requests for sending packets from a particular input is generated. From a grant starting position, the first n requests in a predetermined sequence are identified, where n is less than or equal to the maximum number of connections that can be used in a single packet time to a particular output. The grant starting position is updated in response to the first n grants including a particular grant corresponding to a grant advancement position. In one embodiment, the set of grants generated based on the set of requests is similarly processed using an acceptance starting position and an acceptance advancement position.
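A sketch of a single grant-arbiter round consistent with the description above; the data layout (a boolean request vector per output) and the function signature are illustrative assumptions:

```python
# One grant-arbiter round: scan requests in round-robin order from
# grant_start and issue up to n grants; the pointer advances only when
# the grant at the advancement position is among those issued.

def grant_round(requests, grant_start, advance_pos, n):
    """requests: boolean list indexed by input, True if that input is
    requesting this output; returns (grants, new_grant_start)."""
    order = [(grant_start + i) % len(requests) for i in range(len(requests))]
    grants = [i for i in order if requests[i]][:n]  # first n in sequence
    if advance_pos in grants:
        # advance past the advancement position for the next round
        grant_start = (advance_pos + 1) % len(requests)
    return grants, grant_start
```

Per the abstract, the acceptance arbiters would apply the same pattern to the resulting grants, using an acceptance starting position and acceptance advancement position.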
Abstract:
A precaching system identifies an object, such as a media file, that a user accesses and then analyzes a social graph of the user to identify social graph contacts that may be interested in the object. Based on the content of the object—and the interests and connections of contacts in the social graph—the precaching system determines whether a particular contact in the user's social graph is likely also to access the object. For example, the precaching system may determine a hit score corresponding to the object and a likelihood that the particular contact in the social graph will access the object. If the precaching system determines that the likelihood that the particular contact will access the object meets or exceeds a threshold probability level for precaching the object, the precaching system precaches the object near the contact in anticipation that the contact will access the object.
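An illustrative sketch of the precaching decision; the abstract does not specify how the hit score is computed, so the scoring model here (interest overlap scaled by tie strength) and the threshold value are assumptions:

```python
# Hypothetical hit-score and precaching decision for social-graph contacts.
PRECACHE_THRESHOLD = 0.6  # assumed probability cutoff

def hit_score(object_topics, contact_interests, tie_strength):
    """Estimate the likelihood a contact will access the object."""
    overlap = len(object_topics & contact_interests) / max(1, len(object_topics))
    return min(1.0, overlap * tie_strength)

def contacts_to_precache(object_topics, social_graph):
    """social_graph: contact -> (interests: set, tie_strength: float in [0, 1]).
    Returns the contacts near whom the object should be precached."""
    return [c for c, (interests, tie) in social_graph.items()
            if hit_score(object_topics, interests, tie) >= PRECACHE_THRESHOLD]
```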
Abstract:
Techniques for improving the performance of flow control mechanisms such as Pause are provided. The techniques maintain a fair distribution of available bandwidth while allowing fewer packet drops and maximizing link utilization in a distributed system. For example, in one embodiment, techniques are provided for achieving a fair-share allocation of an egress port's bandwidth across a plurality of ingress ports contending for the same egress port.
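The abstract names the fair-share goal but not an algorithm; the sketch below uses standard max-min fairness to split an egress port's bandwidth across the contending ingress ports, as one plausible reading:

```python
# Max-min fair split of an egress port's bandwidth across ingress ports;
# the algorithm choice is an assumption, not taken from the abstract.

def fair_shares(egress_bw, demands):
    """demands: ingress port -> offered load; returns per-port allocation."""
    alloc, remaining, bw = {}, dict(demands), egress_bw
    while remaining:
        share = bw / len(remaining)
        satisfied = {p: d for p, d in remaining.items() if d <= share}
        if not satisfied:                   # every remaining port is capped
            alloc.update({p: share for p in remaining})
            break
        for p, d in satisfied.items():      # underloaded ports get their demand
            alloc[p] = d
            bw -= d
            del remaining[p]
    return alloc
```

For example, fair_shares(40, {'A': 10, 'B': 50}) allocates 10 to A and the remaining 30 to B, so a lightly loaded ingress port is never starved by a heavy one.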