Abstract:
Various methods and systems are provided for oversubscription buffer management. In one embodiment, among others, a method for oversubscription control determines a utilization level of an oversubscription buffer that is common to a plurality of ingress ports and initiates adjustment of an ingress packet rate of the oversubscription buffer in response to the utilization level. In another embodiment, a method determines an occupancy level of a virtual oversubscription buffer associated with an oversubscription buffer and initiates adjustment of an ingress packet rate in response to the occupancy level. In another embodiment, a rack switch includes an oversubscription buffer configured to receive packets from a plurality of ingress ports and provide the received packets for processing by the rack switch, and a packet flow control configured to monitor an occupancy level of the oversubscription buffer and to initiate adjustment of an ingress packet rate in response to the occupancy level.
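A minimal sketch of the threshold-based control described above, in Python, assuming a hypothetical OversubscriptionBuffer class and arbitrary high/low watermarks forming a hysteresis band; the abstract does not specify the thresholds or the exact rate-adjustment mechanism.

    # Hypothetical names and thresholds; the abstract only requires that the
    # ingress rate be adjusted in response to the shared buffer's utilization.
    class OversubscriptionBuffer:
        def __init__(self, capacity_bytes, high_pct=0.8, low_pct=0.4):
            self.capacity = capacity_bytes
            self.occupancy = 0
            self.high = high_pct * capacity_bytes   # throttle above this level
            self.low = low_pct * capacity_bytes     # restore full rate below this

        def enqueue(self, pkt_len):
            if self.occupancy + pkt_len > self.capacity:
                return False                        # tail drop on overflow
            self.occupancy += pkt_len
            return True

        def dequeue(self, pkt_len):
            self.occupancy = max(0, self.occupancy - pkt_len)

        def ingress_rate_adjustment(self):
            """Map utilization to a rate action shared by all ingress ports."""
            util = self.occupancy / self.capacity
            if self.occupancy >= self.high:
                return ("throttle", util)           # e.g., assert flow control
            if self.occupancy <= self.low:
                return ("full_rate", util)
            return ("hold", util)                   # hysteresis band: no change

    buf = OversubscriptionBuffer(capacity_bytes=1_000_000)
    buf.enqueue(900_000)
    print(buf.ingress_rate_adjustment())            # ('throttle', 0.9)

The hysteresis band between the two watermarks is one common way to keep the rate adjustment from oscillating; the patent leaves the policy open.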
Abstract:
Processing techniques in a network switch help reduce latency in the delivery of data packets to a recipient. The processing techniques include speculative flow status messaging, for example. The speculative flow status messaging may alert an egress tile or output port of an incoming packet before the incoming packet is fully received. The processing techniques may also include implementing a separate accelerated credit pool which provides controlled push capability for the ingress tile or input port to send packets to the egress tile or output port without waiting for a bandwidth credit from the egress tile or output port.
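A minimal sketch of the two latency-reduction techniques, assuming hypothetical IngressTile/EgressTile classes and a simple integer credit model; the actual status-message format and credit accounting are not specified by the abstract.

    class EgressTile:
        def __init__(self):
            self.expected = set()                   # packets announced speculatively

        def on_speculative_status(self, pkt_id):
            # Egress learns of the packet before it is fully received, so it
            # can pre-arrange scheduling and output resources.
            self.expected.add(pkt_id)

    class IngressTile:
        def __init__(self, egress, accelerated_credits=4):
            self.egress = egress
            self.accel_credits = accelerated_credits  # separate push-mode pool
            self.normal_credits = 0                   # granted by egress over time

        def on_header_received(self, pkt_id):
            # Speculative flow status: alert egress as soon as the header
            # parses, before the tail of the packet has arrived.
            self.egress.on_speculative_status(pkt_id)

        def try_send(self, pkt_id):
            if self.normal_credits > 0:
                self.normal_credits -= 1
                return "sent_with_credit"
            if self.accel_credits > 0:
                self.accel_credits -= 1               # controlled push: no wait
                return "sent_accelerated"
            return "blocked"

    eg = EgressTile()
    ing = IngressTile(eg)
    ing.on_header_received("pkt-1")
    print(ing.try_send("pkt-1"))                      # 'sent_accelerated'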
Abstract:
Disclosed are various embodiments that relate to a network switch. The switch determines whether a network packet is associated with a packet processing context, the packet processing context specifying a condition for handling network packets processed in the switch. In response to the network packet being associated with the packet processing context, the switch determines debug metadata for the network packet and stores the debug metadata in a capture buffer.
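A minimal sketch of context-matched debug capture, assuming a hypothetical predicate-based context and a bounded ring as the capture buffer; a real switch would match parsed header fields in hardware.

    from collections import deque

    class PacketProcessingContext:
        def __init__(self, name, predicate):
            self.name = name
            self.predicate = predicate              # condition on the packet

    class DebugCapture:
        def __init__(self, contexts, depth=1024):
            self.contexts = contexts
            self.capture_buffer = deque(maxlen=depth)   # oldest entries age out

        def process(self, packet):
            for ctx in self.contexts:
                if ctx.predicate(packet):
                    self.capture_buffer.append({
                        "context": ctx.name,
                        "src": packet.get("src"),
                        "dst": packet.get("dst"),
                        "len": packet.get("len"),
                    })
                    break                           # one matching context suffices

    ctx = PacketProcessingContext("to_port_7",
                                  lambda p: p.get("dst_port") == 7)
    dbg = DebugCapture([ctx])
    dbg.process({"src": "10.0.0.1", "dst": "10.0.0.2", "len": 64, "dst_port": 7})
    print(list(dbg.capture_buffer))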
Abstract:
A system and method for adjusting an Energy Efficient Ethernet (EEE) control policy using measured power savings. An EEE-enabled device can be designed to report EEE event data. This reported EEE event data can be used to quantify the actual EEE benefits of the EEE-enabled device, debug the EEE-enabled device, and adjust the EEE control policy.
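A minimal sketch of adjusting an EEE control policy from reported event data, assuming hypothetical event records (low-power duration, wake latency) and a single idle-timer knob; the abstract does not prescribe a specific adjustment rule.

    def summarize_eee_events(events):
        """events: list of (low_power_duration_us, wake_latency_us) tuples."""
        total_lp = sum(e[0] for e in events)
        total_wake = sum(e[1] for e in events)
        return total_lp, total_wake

    def adjust_idle_timer(idle_timer_us, events, min_savings_ratio=5.0):
        lp, wake = summarize_eee_events(events)
        if wake == 0:
            return idle_timer_us
        # If wake-up overhead eats too far into the measured low-power time,
        # enter low power less eagerly (longer idle timer); otherwise be greedier.
        if lp / wake < min_savings_ratio:
            return idle_timer_us * 2
        return max(1, idle_timer_us // 2)

    reported = [(500, 20), (30, 20), (800, 20)]     # reported EEE event data
    print(adjust_idle_timer(100, reported))          # 50: measured savings are good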
Abstract:
A system and method to monitor network congestion is provided. The system includes a plurality of ingress and egress ports, and a plurality of queues coupled to the ingress and egress ports and configured to store incoming and outgoing packets. The system also includes a monitoring unit configured to monitor at least one attribute of packets in at least one queue when a start condition occurs, stop monitoring the attribute when an end condition occurs, determine a flow that caused the start condition based on the monitored attribute, and report the monitored attribute and the flow.
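A minimal sketch of start/stop monitoring, assuming hypothetical start and end conditions (queue depth crossing thresholds) and per-flow byte counts as the monitored attribute used to identify the flow that caused the start condition.

    from collections import Counter

    class CongestionMonitor:
        def __init__(self, start_depth, end_depth):
            self.start_depth = start_depth           # start condition threshold
            self.end_depth = end_depth               # end condition threshold
            self.active = False
            self.per_flow_bytes = Counter()          # monitored attribute

        def observe(self, queue_depth, flow_id, pkt_len):
            if not self.active and queue_depth >= self.start_depth:
                self.active = True                   # start condition occurred
                self.per_flow_bytes.clear()
            if self.active:
                self.per_flow_bytes[flow_id] += pkt_len
                if queue_depth <= self.end_depth:    # end condition occurred
                    self.active = False
                    return self.report()
            return None

        def report(self):
            flow, contributed = self.per_flow_bytes.most_common(1)[0]
            return {"culprit_flow": flow, "bytes": contributed,
                    "all_flows": dict(self.per_flow_bytes)}

    mon = CongestionMonitor(start_depth=1000, end_depth=100)
    mon.observe(1200, "flow-A", 900)
    mon.observe(1100, "flow-B", 100)
    print(mon.observe(50, "flow-A", 64))             # reports flow-A as culprit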
Abstract:
Various methods and systems are provided for traffic flow management within distributed traffic. In one example, among others, a distributed system includes egress ports supported by nodes of the distributed system, cut-through tokens (c-tokens), each including an indication of the eligibility of a corresponding egress port to handle cut-through traffic, and a cut-through control ring to pass the c-tokens between the nodes. In another example, a method includes determining whether an egress port is available to handle cut-through traffic based upon a corresponding c-token, claiming the egress port for transmission of at least a portion of a packet, and routing that portion of the packet to the claimed egress port for transmission. In another example, a distributed system includes a first node configured to modify an eligibility indication of a c-token before transmission to a second node configured to route at least a portion of a packet based at least in part upon the eligibility indication.
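A minimal sketch of c-token arbitration on a control ring, assuming a hypothetical token layout (a dict) and a fixed node order standing in for the ring; the abstract does not fix the token format.

    class Node:
        def __init__(self, name):
            self.name = name
            self.pending = []                        # packets awaiting cut-through

        def on_token(self, token):
            # Claim the egress port only if the token marks it eligible.
            if token["eligible"] and self.pending:
                pkt = self.pending.pop(0)
                token["eligible"] = False            # modify eligibility indication
                print(f"{self.name}: cut-through {pkt} via port {token['port']}")
            return token                             # pass token to the next node

    def run_ring(nodes, token, rounds=2):
        for _ in range(rounds):
            for node in nodes:
                token = node.on_token(token)
            token["eligible"] = True                 # port freed between rounds
        return token

    a, b = Node("node-A"), Node("node-B")
    a.pending.append("pkt-1")
    b.pending.append("pkt-2")
    run_ring([a, b], {"port": 3, "eligible": True})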
Abstract:
A switching device is operable to mitigate bandwidth degradation while it is oversubscribed. Due to the latency involved in notifying a scheduler that a queue has transitioned from an active state to an empty state, the scheduler may inadvertently schedule an empty queue for processing, which may degrade the bandwidth of the switching device. To avoid such degradation, the switching device may be configured to control the flow of data provided from the queue to the scheduler so that the data is provided to the scheduler as a burst transaction. For example, the switching device may be configured to delay certain indicators provided by a queue in order to defer notifying the scheduler of when the queue receives and stores data. This may enable the queue to store more data, which can then be provided to the scheduler as a burst transaction.
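A minimal sketch of deferring the queue's non-empty indication so the scheduler sees bursts, assuming a hypothetical burst threshold; the abstract leaves the exact deferral policy open.

    class BurstQueue:
        def __init__(self, burst_threshold=4):
            self.items = []
            self.burst_threshold = burst_threshold
            self.notified = False

        def enqueue(self, pkt):
            self.items.append(pkt)
            # Delay the non-empty indicator until a burst has accumulated, so
            # the scheduler never dequeues from a nearly-empty queue and then
            # races the empty-state notification.
            if not self.notified and len(self.items) >= self.burst_threshold:
                self.notified = True
                return "notify_scheduler"
            return None

        def drain_burst(self):
            burst, self.items = self.items, []       # hand over the whole burst
            self.notified = False
            return burst

    q = BurstQueue()
    for i in range(4):
        event = q.enqueue(f"pkt-{i}")
    print(event)                                     # 'notify_scheduler'
    print(q.drain_burst())                           # all four packets at once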
Abstract:
Disclosed are various embodiments that provide an architecture of memory buffers for a network component configured to process packets. A network component may receive a packet, the packet being associated with a control structure, packet data, an input port set, and an output port set. The network component determines one of a plurality of control structure memory partitions for writing the control structure, based at least upon the input port set and the output port set, and determines one of a plurality of packet data memory partitions for writing the packet data, independently of the input port set.
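A minimal sketch of the two partition-selection rules, assuming hypothetical hash-based selection and fixed partition counts; the abstract only constrains which inputs each decision may depend on.

    import itertools

    CTRL_PARTITIONS = 4
    DATA_PARTITIONS = 8
    _data_rotor = itertools.cycle(range(DATA_PARTITIONS))

    def control_partition(input_port_set, output_port_set):
        # Control structure placement depends on BOTH the input and output
        # port sets, as the abstract requires.
        key = (frozenset(input_port_set), frozenset(output_port_set))
        return hash(key) % CTRL_PARTITIONS

    def data_partition():
        # Packet data placement is independent of the input port set; here a
        # simple rotation spreads packet data across the data partitions.
        return next(_data_rotor)

    ctrl = control_partition({1, 2}, {5})
    data = data_partition()
    print(f"control structure -> partition {ctrl}; packet data -> partition {data}")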
Abstract:
A system for multicast switching for distributed devices may include an ingress node including an ingress memory and an egress node including an egress memory, where the ingress node is communicatively coupled to the egress node. The ingress node may be operable to receive a portion of a multicast frame over an ingress port, bypass the ingress memory and provide the portion to the egress node when the portion satisfies an ingress criterion, and otherwise receive and store the entire multicast frame in the ingress memory before providing the frame to the egress node. The egress node may be operable to receive the portion from the ingress node, bypass the egress memory and provide the portion to an egress port when an egress criterion is satisfied, and otherwise receive and store the entire multicast frame in the egress memory before providing the multicast frame to the egress port.
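A minimal sketch of the per-node cut-through decision with store-and-forward fallback, assuming hypothetical boolean criteria passed in by the caller; real criteria would cover port rates, buffer state, and error conditions.

    def ingress_forward(portion, cut_through_ok, ingress_memory):
        """Return data to hand to the egress node, or None while buffering."""
        if cut_through_ok:
            return portion["data"]                  # bypass the ingress memory
        ingress_memory.append(portion["data"])      # store-and-forward path
        if portion["last"]:
            return b"".join(ingress_memory)         # entire frame stored first
        return None

    def egress_forward(data, cut_through_ok, egress_memory, last=True):
        """Return data to transmit on the egress port, or None while buffering."""
        if cut_through_ok:
            return data                             # bypass the egress memory
        egress_memory.append(data)
        return b"".join(egress_memory) if last else None

    chunk = {"data": b"multicast-frame", "last": True}
    to_egress = ingress_forward(chunk, cut_through_ok=True, ingress_memory=[])
    delivered = egress_forward(to_egress, cut_through_ok=False, egress_memory=[])
    print(delivered)                                # b'multicast-frame'

Because each node makes its bypass decision independently, a frame can cut through one node and be fully buffered at the other, as in the usage above.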