Abstract:
Methods and apparatus for dynamic load balancing using virtual link credit accounting are disclosed. An example method includes receiving, at a network device, a data packet to be communicated using an aggregation group, the aggregation group including a plurality of virtual links having a common destination. The example method further includes determining a hash value based on the packet and determining an assigned virtual link of the plurality of virtual links based on the hash value. The example method also includes reducing a number of available transmission credits for the aggregation group and reducing a number of available transmission credits for the assigned virtual link. The example method still further includes communicating the packet to another network device using the assigned virtual link.
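As a rough illustration of the credit-accounting scheme described above, the following Python sketch hashes a packet onto a virtual link of an aggregation group and debits both the group-level and link-level transmission credits before sending. The class name AggregationGroup, the CRC32 hash, and the credit values are assumptions made for the example, not details taken from the abstract.

```python
# Minimal sketch of virtual link credit accounting; names and values are illustrative.
import zlib

class AggregationGroup:
    def __init__(self, virtual_links, initial_credits):
        self.virtual_links = virtual_links           # members sharing a common destination
        self.group_credits = initial_credits         # credits for the aggregation group
        self.link_credits = {v: initial_credits for v in virtual_links}

    def assign_link(self, packet_bytes):
        # Hash the packet and map the hash value onto one of the virtual links.
        h = zlib.crc32(packet_bytes)
        return self.virtual_links[h % len(self.virtual_links)]

    def send(self, packet_bytes, transmit):
        link = self.assign_link(packet_bytes)
        # Debit credits at both the group level and the assigned-link level.
        self.group_credits -= 1
        self.link_credits[link] -= 1
        transmit(link, packet_bytes)
        return link

group = AggregationGroup(["vlink0", "vlink1", "vlink2"], initial_credits=100)
group.send(b"example packet", transmit=lambda link, pkt: None)
```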
Abstract:
Methods and apparatus for dynamic load balancing are disclosed. An example method includes determining, by a network device, respective quality metrics for each of a plurality of members of an aggregation group of the network device, the respective quality metrics representing respective data traffic loading for each member of the aggregation group. The example method further includes grouping the plurality of aggregation group members into a plurality of loading/quality bands based on their respective quality metrics. The example method also includes selecting members of the aggregation group for transmitting packets from a loading/quality band corresponding with members of the aggregation group having lower data traffic loading relative to the other members of the aggregation group.
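The band-based selection described above can be pictured with a short Python sketch, assuming that a member's quality metric is a 0.0-1.0 utilization figure and that a lower band index means lighter loading; the band edges and helper names are illustrative assumptions.

```python
# Group members into loading/quality bands and pick from the lightest-loaded band.
def band_members(member_loads, band_edges=(0.25, 0.5, 0.75)):
    """Group members into loading/quality bands by their measured utilization."""
    bands = [[] for _ in range(len(band_edges) + 1)]
    for member, load in member_loads.items():
        idx = sum(load > edge for edge in band_edges)   # which band the load falls into
        bands[idx].append(member)
    return bands

def select_member(member_loads):
    """Pick a member from the least-loaded non-empty band."""
    for band in band_members(member_loads):
        if band:
            return band[0]
    return None

loads = {"port0": 0.9, "port1": 0.2, "port2": 0.4}
print(select_member(loads))   # -> "port1", the member in the lightest band
```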
Abstract:
The systems and methods described herein allow for the scaling of output-buffered switches by decoupling the data path from the control path. Some embodiments of the invention include a switch with a memory management unit (MMU), in which the MMU enqueues data packets to an egress queue at a rate that is less than the maximum ingress rate of the switch. Other embodiments include switches that employ pre-enqueue work queues, with an arbiter that selects a data packet for forwarding from one of the pre-enqueue work queues to an egress queue.
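A behavioral sketch (in Python, for illustration only) of the pre-enqueue work-queue variant: packets are accepted into per-source work queues at full ingress rate, and a round-robin arbiter moves at most a fixed number of packets per cycle into the egress queue, so the enqueue rate can be lower than the ingress rate. The class and parameter names, and the round-robin policy, are assumptions rather than details from the abstract.

```python
from collections import deque
from itertools import cycle

class OutputBufferedSwitchModel:
    def __init__(self, num_work_queues, enqueue_rate):
        self.work_queues = [deque() for _ in range(num_work_queues)]
        self.egress_queue = deque()
        self.enqueue_rate = enqueue_rate     # packets enqueued per cycle, below ingress rate
        self._rr = cycle(range(num_work_queues))

    def ingress(self, queue_id, packet):
        # Data path: packets are accepted into pre-enqueue work queues at full ingress rate.
        self.work_queues[queue_id].append(packet)

    def arbitrate(self):
        # Control path: a round-robin arbiter moves at most `enqueue_rate`
        # packets per cycle from the work queues to the egress queue.
        moved = 0
        empty_in_a_row = 0
        while moved < self.enqueue_rate and empty_in_a_row < len(self.work_queues):
            q = self.work_queues[next(self._rr)]
            if q:
                self.egress_queue.append(q.popleft())
                moved += 1
                empty_in_a_row = 0
            else:
                empty_in_a_row += 1

sw = OutputBufferedSwitchModel(num_work_queues=4, enqueue_rate=2)
sw.ingress(0, "pkt-a"); sw.ingress(1, "pkt-b"); sw.ingress(1, "pkt-c")
sw.arbitrate()
print(list(sw.egress_queue))   # at most 2 packets moved this cycle
```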
Abstract:
A system and method for adjusting an energy efficient Ethernet (EEE) control policy using measured power savings. An EEE-enabled device can be designed to report EEE event data. This reported EEE event data can be used to quantify the actual EEE benefits of the EEE-enabled device, debug the EEE-enabled device, and adjust the EEE control policy.
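The following hypothetical Python snippet illustrates how reported EEE event data (here, low-power-idle entry/exit timestamps) might be used to estimate actual savings and to adjust one control-policy parameter such as an idle timer. The event fields, power figures, and doubling/halving rule are assumptions made for the example, not details from the abstract.

```python
def measured_savings(events, idle_power_w, active_power_w):
    """Estimate energy saved from reported low-power-idle (LPI) intervals."""
    lpi_seconds = sum(e["lpi_exit_ts"] - e["lpi_entry_ts"] for e in events)
    return lpi_seconds * (active_power_w - idle_power_w)

def adjust_idle_timer(current_timer_us, events, target_entries_per_s, window_s):
    """Lengthen the idle timer if LPI entries are too frequent (thrashing),
    otherwise shorten it so the link enters LPI sooner."""
    entries_per_s = len(events) / window_s
    if entries_per_s > target_entries_per_s:
        return current_timer_us * 2
    return max(1, current_timer_us // 2)

events = [{"lpi_entry_ts": 0.0, "lpi_exit_ts": 0.4},
          {"lpi_entry_ts": 1.0, "lpi_exit_ts": 1.7}]
print(measured_savings(events, idle_power_w=0.1, active_power_w=0.7))
print(adjust_idle_timer(100, events, target_entries_per_s=1.0, window_s=2.0))
```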
Abstract:
Disclosed are various embodiments that provide an architecture of memory buffers for a network component configured to process packets. A network component may receive a packet, the packet being associated with a control structure, packet data, an input port set, and an output port set. The network component determines one of a plurality of control structure memory partitions for writing the control structure, the one of the plurality of control structure memory partitions being determined based at least upon the input port set and the output port set, and determines one of a plurality of packet data memory partitions for writing the packet data, the one of the plurality of packet data memory partitions being determined independently of the input port set.
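A minimal sketch, assuming hash-based placement for control structures and fill-level-based placement for packet data, of how the two partition decisions can differ: the control-structure partition depends on both the input and output port sets, while the packet-data partition is chosen without regard to the input port set. The partition counts and selection rules are illustrative assumptions.

```python
NUM_CONTROL_PARTITIONS = 4
NUM_DATA_PARTITIONS = 4

def control_partition(input_port_set, output_port_set):
    # Control-structure memory partition depends on both port sets.
    return hash((frozenset(input_port_set), frozenset(output_port_set))) % NUM_CONTROL_PARTITIONS

def data_partition(output_port_set, fill_levels):
    # Packet-data partition is chosen independently of the input port set,
    # e.g. the least-full partition available to this output port set.
    return min(range(NUM_DATA_PARTITIONS), key=lambda p: fill_levels[p])

cp = control_partition({1, 2}, {5})
dp = data_partition({5}, fill_levels=[10, 3, 7, 3])
print(cp, dp)
```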
Abstract:
Methods and apparatus for dynamic load balancing are disclosed. An example method includes receiving, at a network device, a data packet to be sent via an aggregation group, the aggregation group including a plurality of aggregate members. The example method further includes determining, based on the data packet, a flow identifier of a flow to which the data packet belongs and determining a state of the flow. The example method also includes determining, based on the flow identifier and the state of the flow, an assigned member of the plurality of aggregate members for the flow and communicating the packet via the assigned member.
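One way to picture flow-state-aware member assignment is sketched below in Python: a new or inactive flow may be assigned (or reassigned) to the least-loaded member, while an active flow keeps its current member to avoid packet reordering. The flow identifier, inactivity timeout, and load counters are assumptions made for the example.

```python
import time

class FlowBalancer:
    def __init__(self, members, inactivity_timeout_s=0.05):
        self.members = members
        self.timeout = inactivity_timeout_s
        self.flow_table = {}          # flow_id -> (assigned member, last-seen timestamp)
        self.load = {m: 0 for m in members}

    def flow_id(self, packet):
        # Flow identifier derived from the packet (a 5-tuple in practice).
        return (packet["src"], packet["dst"], packet["proto"])

    def assign(self, packet):
        fid = self.flow_id(packet)
        now = time.monotonic()
        entry = self.flow_table.get(fid)
        if entry is None or now - entry[1] > self.timeout:
            # New or inactive flow: safe to pick the least-loaded member.
            member = min(self.members, key=lambda m: self.load[m])
        else:
            member = entry[0]         # active flow: keep its assigned member
        self.flow_table[fid] = (member, now)
        self.load[member] += 1
        return member

lb = FlowBalancer(["m0", "m1"])
print(lb.assign({"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 6}))
```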
Abstract:
Methods and apparatus for dynamic bandwidth allocation are disclosed. An example method includes determining, by a network device, at least one of a congestion state of a packet memory buffer of the network device and a congestion state of an external packet memory that is operationally coupled with the network device. The example method further includes dynamically adjusting, by the network device, respective bandwidth allocations for read and write operations between the network device and the external packet memory, the dynamic adjusting being based on the determined congestion state of the packet memory buffer and/or the determined congestion state of the external packet memory.
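The dynamic adjustment can be illustrated with a small Python function that splits a fixed memory-interface bandwidth budget between write operations (into the external packet memory) and read operations (out of it) depending on which side is congested. The watermark and the 70/30 split are illustrative assumptions, not values from the abstract.

```python
def allocate_bandwidth(total_gbps, internal_buffer_fill, external_memory_fill,
                       high_watermark=0.8):
    internal_congested = internal_buffer_fill > high_watermark
    external_congested = external_memory_fill > high_watermark
    if internal_congested and not external_congested:
        write_share = 0.7    # drain the on-chip buffer into external memory
    elif external_congested and not internal_congested:
        write_share = 0.3    # favor reads so the external memory drains
    else:
        write_share = 0.5    # balanced, or both/neither congested
    return total_gbps * write_share, total_gbps * (1.0 - write_share)

write_bw, read_bw = allocate_bandwidth(100.0, internal_buffer_fill=0.9,
                                       external_memory_fill=0.2)
print(write_bw, read_bw)   # -> 70.0 30.0
```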
Abstract:
According to one general aspect, a method may include receiving data from a network device. In some embodiments, the method may include writing the data to a memory bank that is part of a plurality of at least single-ported memory banks that have been grouped to act as a single at least dual-ported aggregated memory element. In various embodiments, the method may include monitoring the usage of the plurality of memory banks. In one embodiment, the method may include, based upon a predefined set of criteria, placing a memory bank that meets the predefined criteria in a low-power mode.
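A hedged Python sketch of the bank-management idea: several banks are presented as one aggregated memory element, each access updates per-bank usage, and a monitoring pass places a bank into a low-power mode when it meets a simple criterion (here, empty and idle for a while). The criterion and timing values are assumptions made for the example.

```python
import time

class AggregatedMemory:
    def __init__(self, num_banks, idle_threshold_s=1.0):
        self.banks = [{"data": {}, "last_access": time.monotonic(),
                       "low_power": False} for _ in range(num_banks)]
        self.idle_threshold_s = idle_threshold_s

    def write(self, address, value):
        bank = self.banks[address % len(self.banks)]   # spread addresses over banks
        bank["data"][address] = value
        bank["last_access"] = time.monotonic()
        bank["low_power"] = False                      # wake the bank on access

    def monitor(self):
        # Place banks meeting the predefined criteria into a low-power mode.
        now = time.monotonic()
        for bank in self.banks:
            idle = now - bank["last_access"] > self.idle_threshold_s
            if idle and not bank["data"]:
                bank["low_power"] = True

mem = AggregatedMemory(num_banks=4)
mem.write(0x10, b"payload")
mem.monitor()
```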