Abstract:
This disclosure describes enhancements to Ethernet for higher-performance applications such as storage, HPC, and Ethernet-based fabric interconnects. It provides mechanisms for lossless fabric operation with error detection and retransmission to improve link reliability, frame preemption to allow higher-priority traffic to interrupt lower-priority traffic, virtual channel support for deadlock avoidance by extending the class-of-service functionality defined in IEEE 802.1Q, a new header format for efficient forwarding/routing in the fabric interconnect, and a header CRC for reliable cut-through forwarding in the fabric interconnect. When added to standard and/or proprietary Ethernet protocols, these enhancements broaden the applicability of Ethernet to newer usage models and fabric interconnects that are currently served by alternative fabric technologies such as InfiniBand, Fibre Channel, and other proprietary technologies.
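As an illustration of the header-CRC idea only, the sketch below builds a fabric header whose CRC covers just the header fields, so a switch can validate the header and begin cut-through forwarding before the remainder of the frame arrives. The field names, field widths, and the use of CRC-32 are assumptions for illustration; the disclosure does not specify the actual format here.

```python
# Minimal sketch (not the disclosed format) of a fabric header with a
# header-only CRC for reliable cut-through forwarding.
import struct
import zlib

HEADER_FMT = "!BBHHH"  # version, virtual channel, traffic class, dst id, src id

def build_header(version, vchan, tclass, dst_id, src_id):
    body = struct.pack(HEADER_FMT, version, vchan, tclass, dst_id, src_id)
    hcrc = zlib.crc32(body) & 0xFFFFFFFF          # CRC over the header only
    return body + struct.pack("!I", hcrc)

def header_is_valid(header):
    body, (hcrc,) = header[:-4], struct.unpack("!I", header[-4:])
    return zlib.crc32(body) & 0xFFFFFFFF == hcrc  # safe to start cut-through forwarding

hdr = build_header(version=1, vchan=2, tclass=5, dst_id=0x1234, src_id=0x00AB)
assert header_is_valid(hdr)
```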
Abstract:
One embodiment provides a method for enabling class-based credit flow control for a network node in communication with a link partner using an Ethernet communications protocol. The method includes receiving a control frame from the link partner. The control frame includes at least one field specifying credit for at least one traffic class, where the credit is based on available space in a receive buffer associated with that traffic class. The method further includes sending data packets associated with the at least one traffic class to the link partner based on the credit.
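A minimal sender-side sketch of such a class-based credit scheme follows. The ControlFrame layout, the one-credit-per-packet accounting, and the transmit callback are illustrative assumptions rather than the claimed frame format.

```python
from collections import defaultdict, namedtuple

# Hypothetical control frame: per-traffic-class credits advertised by the link
# partner, derived from free space in its per-class receive buffers.
ControlFrame = namedtuple("ControlFrame", "credits")

class CreditSender:
    def __init__(self, transmit):
        self.transmit = transmit             # callable(traffic_class, packet)
        self.credits = defaultdict(int)      # traffic class -> available credits

    def on_control_frame(self, frame):
        for tclass, credit in frame.credits.items():
            self.credits[tclass] += credit

    def send(self, tclass, packet):
        if self.credits[tclass] <= 0:
            return False                     # no buffer space at the partner: hold the packet
        self.credits[tclass] -= 1            # one credit consumed per packet sent
        self.transmit(tclass, packet)
        return True

sender = CreditSender(transmit=lambda tc, pkt: print("sent", tc, pkt))
sender.on_control_frame(ControlFrame(credits={0: 2, 3: 1}))
print(sender.send(0, b"payload"), sender.send(3, b"payload"), sender.send(3, b"more"))
```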
Abstract:
Systems and methods may provide for determining a local traffic quota for a service associated with an overlay network and determining an allocation of the local traffic quota across a set of data sources associated with the overlay network. Additionally, the allocation may be imposed on one or more packets received from the set of data sources. In one example, imposing the allocation includes sending the packets to a parent node connected to the overlay router in a hierarchy of the overlay network if delivery of the packets to the parent node complies with the allocation, and delaying delivery of the packets to the parent node if it does not.
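The sketch below illustrates one way an overlay router might impose such an allocation, assuming an equal split of the local quota across sources and a simple per-interval byte budget; the class and method names are hypothetical.

```python
import time
from collections import deque

class OverlayRouter:
    def __init__(self, deliver, sources, local_quota_bytes, interval=1.0):
        self.deliver = deliver                 # hands a packet to the parent node
        self.interval = interval
        # equal split of the local quota across this router's data sources
        self.alloc = {s: local_quota_bytes // len(sources) for s in sources}
        self.used = {s: 0 for s in sources}
        self.delayed = deque()
        self.window_start = time.monotonic()

    def on_packet(self, source, packet):
        self._maybe_reset_window()
        if self.used[source] + len(packet) <= self.alloc[source]:
            self.used[source] += len(packet)
            self.deliver(packet)                       # delivery complies with the allocation
        else:
            self.delayed.append((source, packet))      # delay until budget is available

    def _maybe_reset_window(self):
        now = time.monotonic()
        if now - self.window_start >= self.interval:
            self.window_start = now
            self.used = {s: 0 for s in self.used}
            pending, self.delayed = self.delayed, deque()
            for source, packet in pending:             # retry delayed packets
                self.on_packet(source, packet)

router = OverlayRouter(deliver=lambda p: print("to parent:", len(p), "bytes"),
                       sources=["sensor-a", "sensor-b"], local_quota_bytes=64)
router.on_packet("sensor-a", b"x" * 24)   # within sensor-a's 32-byte share: forwarded
router.on_packet("sensor-a", b"x" * 24)   # would exceed the share: delayed
```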
Abstract:
Methods, apparatus, and network architectures relating to a Hop-by-Hop packet forwarding technique using “stepping stone” switches. The network architectures intersperse stepping stone switches with non-stepping stone network elements such as conventional switches, routers, repeaters, etc. The stepping stone switches are configured to route packets as multiplexed flows along tunneled sub-paths between stepping stone switches in a hop-by-hop manner with error recovery, as opposed to conventional routing under which packets are routed from a source to a destination along an arbitrary path or a (generally) lengthy flow-based path. Accordingly, packets from a source endpoint are routed to a destination endpoint via multiple sub-paths connecting pairs of stepping stone switches, with each sub-path traversing one or more conventional switches and constituting a logical Hop in the Hop-by-Hop route.
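The following sketch illustrates the Hop-by-Hop idea: each stepping stone forwards a packet over a tunneled sub-path to the next stepping stone, and error recovery happens per sub-path rather than end-to-end. The Tunnel and SteppingStoneSwitch classes and the three-attempt retry are illustrative assumptions.

```python
class Tunnel:
    """Sub-path to the next stepping stone, tunneled over conventional switches."""
    def send(self, packet):
        return True   # a real tunnel would report sub-path delivery success/failure

class SteppingStoneSwitch:
    def __init__(self, name):
        self.name = name
        self.tunnels = {}                  # next stepping stone -> Tunnel

    def forward(self, packet, route):
        """route: ordered list of the remaining stepping stones to traverse."""
        if not route:
            return self.name               # final stepping stone reached
        next_stone = route[0]
        tunnel = self.tunnels[next_stone]
        for _ in range(3):                 # error recovery per sub-path (per logical Hop)
            if tunnel.send(packet):
                return next_stone.forward(packet, route[1:])
        raise RuntimeError(f"sub-path {self.name} -> {next_stone.name} failed")

a, b, c = SteppingStoneSwitch("A"), SteppingStoneSwitch("B"), SteppingStoneSwitch("C")
a.tunnels[b] = Tunnel()
b.tunnels[c] = Tunnel()
print(a.forward(b"payload", [b, c]))   # delivered via logical Hops A->B and B->C
```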
Abstract:
Methods, apparatus, and networks configured to manage network congestion using packet recirculation. The networks employ network elements (e.g., RBridges in Layer 2 networks and switches/routers in Layer 3 networks) that are configured to support multi-path forwarding, under which packets addressed to the same destination may be routed to it via multiple paths. In response to network congestion conditions, such as the lack of a non-congested port via which a shortest path to the destination may be accessed, a packet may be routed backward toward the source node or forwarded toward the destination along a non-shortest path. The network elements may employ loopback buffers for looping packets back toward the source via the same link on which they were received.
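A minimal sketch of the forwarding decision described above, with hypothetical port and buffer structures: prefer a non-congested shortest-path port, then a non-shortest path toward the destination, and otherwise recirculate the packet backward via the loopback buffer of the ingress link.

```python
def select_output(packet, shortest_ports, other_ports, ingress_port,
                  is_congested, loopback_buffer):
    for port in shortest_ports:
        if not is_congested(port):
            return ("forward", port)         # normal shortest-path forwarding
    for port in other_ports:
        if not is_congested(port):
            return ("forward", port)         # non-shortest path toward the destination
    loopback_buffer.append(packet)           # recirculate backward toward the source
    return ("loopback", ingress_port)        # out the same link the packet arrived on

loopback = []
decision = select_output("pkt-1", shortest_ports=[1, 2], other_ports=[3],
                         ingress_port=0, is_congested=lambda p: True,
                         loopback_buffer=loopback)
print(decision, loopback)                    # ('loopback', 0) ['pkt-1']
```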
Abstract:
In accordance with some embodiments, identification of transport streams facilitates their classification, which in turn enables each classification to be matched to a quality of service policy. Quality of service policies may thus be enforced so that different streams are afforded the appropriate quality of service.
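For illustration only, the sketch below maps an identified stream to a classification and then to a quality of service policy; the classification keys and the DSCP/rate parameters are assumptions, not taken from the embodiments.

```python
# Hypothetical classification -> QoS policy table.
QOS_POLICIES = {
    "video": {"dscp": 34, "min_rate_kbps": 4000},   # AF41-style treatment
    "voice": {"dscp": 46, "min_rate_kbps": 128},    # EF-style treatment
    "bulk":  {"dscp": 10, "min_rate_kbps": 0},      # best-effort background
}

def classify(stream):
    # stream identification might use ports, payload inspection, or signaling
    if stream.get("codec") in ("h264", "h265"):
        return "video"
    if stream.get("dst_port") == 5060 or stream.get("codec") == "opus":
        return "voice"
    return "bulk"

def policy_for(stream):
    return QOS_POLICIES[classify(stream)]

print(policy_for({"codec": "h264", "dst_port": 49170}))
```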
Abstract:
Methods, apparatus, and systems for distributing network loads in a manner that is resilient to system topology changes. Distribution functions and associated operations are implemented on multiple load splitters such that if a load splitter becomes inoperative, one or more other load splitters can forward packets corresponding to flows previously handled by the inoperative load splitter without requiring flow state synchronization to be maintained across load splitters. The distribution functions are implemented in a manner that distributes packets for the same flows to the same servers through system topology changes, addressing both the situation in which servers fail and/or are taken off-line and the situation in which such servers or replacement servers are brought back on-line. The techniques are facilitated, in part, via the use of redistributed flow lists and/or Bloom filters that are marked to track redistributed flows. A novel Bloom filter recycle scheme is also disclosed.
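The sketch below shows how a Bloom filter marked with redistributed flows can let any load splitter keep a failed-over flow on its backup server without shared per-flow state; the hash choices, filter size, and server-selection function are illustrative assumptions and do not reflect the disclosed recycle scheme.

```python
import hashlib
import zlib

class BloomFilter:
    """Marks redistributed flows; membership tests may rarely report false positives."""
    def __init__(self, bits=1 << 16, hashes=3):
        self.bits, self.hashes, self.array = bits, hashes, bytearray(bits // 8)

    def _positions(self, key):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def add(self, key):
        for p in self._positions(key):
            self.array[p // 8] |= 1 << (p % 8)

    def __contains__(self, key):
        return all(self.array[p // 8] & (1 << (p % 8)) for p in self._positions(key))

def pick_server(flow_id, servers, alive, redistributed):
    # deterministic distribution function: every load splitter computes the same
    # primary server for a flow without sharing per-flow state
    primary = servers[zlib.crc32(flow_id.encode()) % len(servers)]
    if primary in alive and flow_id not in redistributed:
        return primary
    backups = [s for s in servers if s in alive and s != primary]
    backup = backups[zlib.crc32(flow_id.encode()) % len(backups)]
    redistributed.add(flow_id)        # keep the flow on its backup after failover
    return backup

servers, bf = ["s1", "s2", "s3"], BloomFilter()
flow = "10.0.0.1:5000->10.0.1.1:80/tcp"
print(pick_server(flow, servers, alive={"s1", "s3"}, redistributed=bf))
print(pick_server(flow, servers, alive={"s1", "s2", "s3"}, redistributed=bf))
```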
Abstract:
Methods and apparatus for implementing notification by network elements of packet drops. In response to determining that a packet is to be dropped, a network element such as a switch or router determines the source of the packet and returns a dropped-packet notification message to the source. Upon receipt of the notification, networking software or embedded hardware on the source causes the dropped packet to be retransmitted. The notification may also be sent from the network element to the destination computer to inform networking software or embedded logic implemented by the destination computer that the packet was dropped and that notification to the source has been sent, relieving the destination of the need to send a Selective Acknowledgment (SACK) message to inform the source that the packet was not delivered.
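A minimal sketch of this notification flow follows, with hypothetical message fields and wiring: the network element notifies both source and destination of the drop, and the source retransmits from its store of unacknowledged packets.

```python
class SourceHost:
    def __init__(self, transmit):
        self.transmit = transmit
        self.unacked = {}                      # (flow, seq) -> packet awaiting ACK

    def send(self, packet):
        self.unacked[(packet["flow"], packet["seq"])] = packet
        self.transmit(packet)

    def on_drop_notify(self, note):
        packet = self.unacked.get((note["flow"], note["seq"]))
        if packet is not None:
            self.transmit(packet)              # retransmit without waiting for a SACK

class NetworkElement:
    def __init__(self, notify):                # notify(host, message)
        self.notify = notify

    def drop(self, packet):
        note = {"flow": packet["flow"], "seq": packet["seq"], "reason": "queue full"}
        self.notify(packet["src"], note)       # source retransmits on receipt
        self.notify(packet["dst"], note)       # destination can suppress its SACK

# hypothetical wiring: the switch's notify callback drives the source directly
source = SourceHost(transmit=lambda p: print("tx", p["flow"], p["seq"]))
switch = NetworkElement(notify=lambda host, note:
                        source.on_drop_notify(note) if host == "srcA" else None)
pkt = {"flow": "f1", "seq": 7, "src": "srcA", "dst": "dstB"}
source.send(pkt)
switch.drop(pkt)     # prints a second "tx f1 7" as the source retransmits
```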
Abstract:
Methods and apparatus for multiplexing many client streams over a single connection. A proxy server establishes multiple TCP connections with respective clients that desire to access a web server connected to the proxy server via a multiplexed TCP connection. TCP packets received from the clients are separated out based on their TCP connections, and the packet payload data is extracted and added to per-client data streams. Data segments comprising sequential runs of bits from the client data streams are embedded in multiplexed (MUX) TCP packets that are sent over the multiplexed TCP connection. Upon receipt, the web server de-encapsulates the data segments and buffers them in queues allocated for each TCP connection as re-assembled client data streams. This enables the packet flows transported over the multiplexed connection to be individually controlled for each TCP connection. The multiplexed TCP connection may also be used to forward packet payload data generated at the web server to the clients via the proxy server and the clients' TCP connections.
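The sketch below shows one possible framing for the multiplexed connection, assuming a simple connection-id/length prefix on each data segment (the actual MUX packet format is not specified here): the proxy wraps each client segment, and the web server de-multiplexes the stream into per-connection queues.

```python
import struct
from collections import defaultdict

def mux(conn_id, segment):
    # prefix each data segment with its client connection id and length
    return struct.pack("!II", conn_id, len(segment)) + segment

class Demux:
    def __init__(self):
        self.buffer = b""
        self.queues = defaultdict(bytearray)   # conn_id -> re-assembled client stream

    def feed(self, data):
        self.buffer += data
        while len(self.buffer) >= 8:
            conn_id, length = struct.unpack("!II", self.buffer[:8])
            if len(self.buffer) < 8 + length:
                break                          # wait for the rest of the segment
            self.queues[conn_id] += self.buffer[8:8 + length]
            self.buffer = self.buffer[8 + length:]

d = Demux()
d.feed(mux(1, b"GET / HTTP/1.1\r\n") + mux(2, b"GET /other HTTP/1.1\r\n"))
print(dict(d.queues))                          # per-connection re-assembled streams
```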