Abstract:
Methods, apparatus, and systems for implementing flexible credit exchange within high performance fabrics. Available buffer space in a receive buffer on the receive-side of a link is managed and tracked at the transmit-side of the link using credits. Peer link interfaces coupled via a link are provided with receive buffer configuration information that specifies how the receive buffer space in each peer is partitioned and how space is allocated for each buffer, including a plurality of virtual lane (VL) buffers. Credits are used for tracking buffer space consumption, and credits are returned from the receive-side to indicate freed buffer space. The peer link interfaces exchange credit organization information to inform the other peer of how much space each credit represents. In connection with data transfer over the link, the transmit-side de-allocates credits based on the amount of buffer space to be consumed in the applicable buffers in the receive buffer. Upon space being freed in the receive buffer, the receive-side returns credit ACKnowledgements (ACKs) identifying the VL for which space has been freed.
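The following is a minimal sketch of how a transmit-side credit manager along these lines might track per-VL receive buffer space. The structure and function names, the 64-byte credit unit, the 8-VL count, and the example buffer allocation are illustrative assumptions, not details taken from the abstract.

/* Sketch of transmit-side, per-VL credit accounting (illustrative assumptions:
 * 64-byte credit units, 8 VLs, static per-VL buffer allocation). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_VLS       8
#define CREDIT_BYTES 64   /* bytes of receive buffer represented by one credit */

typedef struct {
    uint32_t credits_avail[NUM_VLS];  /* credits the transmitter may still spend */
} tx_credit_state;

/* Initialize from the receive buffer configuration exchanged with the peer. */
static void tx_credits_init(tx_credit_state *s, const uint32_t vl_buf_bytes[NUM_VLS])
{
    for (int vl = 0; vl < NUM_VLS; vl++)
        s->credits_avail[vl] = vl_buf_bytes[vl] / CREDIT_BYTES;
}

/* Before transmitting, de-allocate credits for the buffer space the data will
 * consume on the receive-side; refuse to send if too few credits remain. */
static bool tx_consume(tx_credit_state *s, int vl, uint32_t bytes)
{
    uint32_t needed = (bytes + CREDIT_BYTES - 1) / CREDIT_BYTES;
    if (s->credits_avail[vl] < needed)
        return false;              /* stall this VL until credits are returned */
    s->credits_avail[vl] -= needed;
    return true;
}

/* Handle a credit ACK from the receive-side identifying the VL whose buffer
 * space has been freed. */
static void tx_credit_ack(tx_credit_state *s, int vl, uint32_t credits_returned)
{
    s->credits_avail[vl] += credits_returned;
}

int main(void)
{
    uint32_t cfg[NUM_VLS] = {4096, 4096, 2048, 2048, 1024, 1024, 1024, 1024};
    tx_credit_state s;
    tx_credits_init(&s, cfg);

    printf("send 1500B on VL0: %s\n", tx_consume(&s, 0, 1500) ? "ok" : "stalled");
    tx_credit_ack(&s, 0, 24);      /* receiver reports 24 credits freed on VL 0 */
    printf("VL0 credits now: %u\n", (unsigned)s.credits_avail[0]);
    return 0;
}

Because the peers exchange credit organization information, the CREDIT_BYTES constant above would in practice be a negotiated value rather than a compile-time constant.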
Abstract:
Methods, apparatus, and systems for implementing Quality of Service (QoS) within high performance fabrics. A multi-level QoS scheme is implemented including virtual fabrics, Traffic Classes, Service Levels (SLs), Service Channels (SCs), and Virtual Lanes (VLs). SLs are implemented for Layer 4 (Transport Layer) end-to-end transfer of fabric packets, while SCs are used to differentiate fabric packets at the Link Layer. Fabric packets are divided into flits, with fabric packet data transmitted over fabric links as flit streams. Fabric switch input ports and device receive ports detect the SC IDs of received fabric packets and implement SC-to-VL mappings to determine which VL buffers to buffer the fabric packet flits in. An SL may have multiple SCs, and SC-to-SC mapping may be implemented to change the SC of a fabric packet as it is forwarded through the fabric while maintaining its SL. A Traffic Class may include multiple SLs, enabling request and response traffic for an application to employ separate SLs.
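A minimal sketch of the table-driven SC-to-VL and SC-to-SC mappings described above follows. The table sizes (32 SCs, 8 VLs), the structure layout, and the example mappings are assumptions chosen for illustration.

/* Sketch of per-port SC-to-VL and SC-to-SC mapping tables (illustrative only). */
#include <stdint.h>
#include <stdio.h>

#define NUM_SCS 32
#define NUM_VLS  8

typedef struct {
    uint8_t sc_to_vl[NUM_SCS];  /* receive side: which VL buffer holds flits for this SC */
    uint8_t sc_to_sc[NUM_SCS];  /* forwarding side: SC rewrite applied when sending on    */
} port_qos_tables;

/* On receipt, the SC ID carried by the fabric packet selects a VL buffer. */
static uint8_t select_vl(const port_qos_tables *t, uint8_t sc_id)
{
    return t->sc_to_vl[sc_id];
}

/* On forwarding, the SC may be changed while the packet's SL is preserved
 * end to end. */
static uint8_t rewrite_sc(const port_qos_tables *t, uint8_t sc_id)
{
    return t->sc_to_sc[sc_id];
}

int main(void)
{
    port_qos_tables t;
    for (int sc = 0; sc < NUM_SCS; sc++) {
        t.sc_to_vl[sc] = (uint8_t)(sc % NUM_VLS);   /* simple modulo SC-to-VL mapping */
        t.sc_to_sc[sc] = (uint8_t)sc;               /* identity SC-to-SC by default   */
    }
    t.sc_to_sc[5] = 6;                              /* example: forward SC 5 traffic as SC 6 */

    int sc_in = 5;
    printf("SC %d -> VL %d, forwarded as SC %d\n",
           sc_in, select_vl(&t, (uint8_t)sc_in), rewrite_sc(&t, (uint8_t)sc_in));
    return 0;
}

In this sketch both SC 5 and SC 6 could belong to the same SL, which is how the SC can change hop by hop while the SL, and the Traffic Class above it, remain stable.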
Abstract:
Methods, apparatus, and systems for implementing hierarchical and lossless packet preemption and interleaving to reduce latency jitter in flow-controlled, packet-based networks. Fabric packets are divided into a plurality of data units, with data units for different fabric packets buffered in separate buffers. Data units are pulled from the buffers and added to a transmit stream in which groups of data units are interleaved. Upon receipt by a receiver, the groups of data units are separated out and buffered in separate buffers under which data units for the same fabric packet are grouped together. In one aspect, each buffer is associated with a respective virtual lane (VL), and fabric packets are effectively transferred over fabric links using virtual lanes. VLs may have different levels of priority, under which data units for fabric packets in higher-priority VLs may preempt fabric packets in lower-priority VLs. By transferring data units rather than entire packets, transmission of a packet can be temporarily paused in favor of a higher-priority packet. Multiple levels of preemption and interleaving, in a nested manner, are supported.
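A minimal sketch of a priority-based transmit scheduler of this kind is shown below: data units are pulled from per-VL buffers, and a higher-priority VL with pending data units preempts a lower-priority packet mid-transfer. The 4-VL count, the fixed priority values, and the pending-count representation are assumptions for illustration.

/* Sketch of priority-based flit interleaving across VL buffers (illustrative only). */
#include <stdio.h>

#define NUM_VLS 4

typedef struct {
    int priority;        /* higher value preempts lower */
    int flits_pending;   /* data units waiting in this VL's buffer */
} vl_queue;

/* Pick the next VL to pull a data unit from: the highest-priority VL with
 * pending data units wins, so a newly arrived high-priority packet interleaves
 * its data units into the stream, pausing a lower-priority packet. */
static int next_vl(const vl_queue q[NUM_VLS])
{
    int best = -1;
    for (int vl = 0; vl < NUM_VLS; vl++) {
        if (q[vl].flits_pending > 0 &&
            (best < 0 || q[vl].priority > q[best].priority))
            best = vl;
    }
    return best;         /* -1 means nothing to send */
}

int main(void)
{
    vl_queue q[NUM_VLS] = {
        {.priority = 0, .flits_pending = 6},   /* bulk packet in progress on VL 0 */
        {.priority = 2, .flits_pending = 0},
        {.priority = 1, .flits_pending = 0},
        {.priority = 0, .flits_pending = 0},
    };

    /* Transmit a few data units, then a high-priority packet arrives on VL 1
     * and preempts VL 0 until its data units are drained. */
    for (int step = 0; step < 10; step++) {
        if (step == 3)
            q[1].flits_pending = 2;
        int vl = next_vl(q);
        if (vl < 0)
            break;
        printf("step %d: transmit data unit from VL %d\n", step, vl);
        q[vl].flits_pending--;
    }
    return 0;
}

Running the sketch shows VL 0's data units being paused at step 3 and resumed once VL 1 is drained; the receiver reassembles each packet because the data units are sorted back into per-VL buffers on arrival. Nested, multi-level preemption would extend this by allowing a still-higher-priority VL to preempt the preempting packet in turn.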
Abstract:
Methods, apparatus, and systems for reliably transferring Ethernet packet data over a link layer and facilitating fabric-to-Ethernet and Ethernet-to-fabric gateway operations at matching wire speed and packet data rate. Ethernet header and payload data is extracted from Ethernet frames received at the gateway and encapsulated in fabric packets that are forwarded to the fabric endpoint hosting the entity to which the Ethernet packet is addressed. The fabric packets are divided into flits, which are bundled in groups to form link packets that are transferred over the fabric at the Link Layer using a reliable transmission scheme employing implicit ACKnowledgements. At the endpoint, the fabric packet is regenerated and the Ethernet packet data is de-encapsulated. The Ethernet frames received from and transmitted to an Ethernet network are encoded using 64b/66b encoding, which has an overhead-to-data bit ratio of 1:32. The link packets have the same ratio, comprising one overhead bit per flit plus a 14-bit CRC and a 2-bit credit return field or sideband used for credit-based flow control.
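The matching 1:32 ratios can be checked with a short calculation. For 64b/66b encoding, each 64 data bits carry 2 overhead bits, giving 2:64 = 1:32. For the link packet, with n flits of 64 data bits plus 1 overhead bit each, a 14-bit CRC, and a 2-bit credit return field, the ratio (n + 16):64n equals 1:32 when 64n = 32(n + 16), i.e. n = 16 flits per link packet; the 16-flit figure is derived here from the stated numbers, not given in the abstract. The sketch below just verifies this arithmetic.

/* Sketch verifying the stated 1:32 overhead-to-data ratios (illustrative only;
 * the 16-flit link packet size is derived from the abstract's numbers). */
#include <stdio.h>

int main(void)
{
    /* 64b/66b encoding: 2 sync/overhead bits per 64 data bits. */
    int enc_data = 64, enc_overhead = 2;
    printf("64b/66b  overhead:data = %d:%d (= 1:%d)\n",
           enc_overhead, enc_data, enc_data / enc_overhead);

    /* Link packet: 16 flits of 64 data bits + 1 overhead bit each,
     * plus a 14-bit CRC and a 2-bit credit return field. */
    int flits = 16;
    int lp_data = flits * 64;                  /* 1024 data bits     */
    int lp_overhead = flits * 1 + 14 + 2;      /*   32 overhead bits */
    printf("link pkt overhead:data = %d:%d (= 1:%d)\n",
           lp_overhead, lp_data, lp_data / lp_overhead);
    return 0;
}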