Abstract:
Method, apparatus, and systems for reliably transferring Ethernet packet data over a link layer and facilitating fabric-to-Ethernet and Ethernet-to-fabric gateway operations at matching wire speed and packet data rate. Ethernet header and payload data is extracted from Ethernet frames received at the gateway and encapsulated in fabric packets to be forwarded to a fabric endpoint hosting an entity to which the Ethernet packet is addressed. The fabric packets are divided into flits, which are bundled in groups to form link packets that are transferred over the fabric at the Link layer using a reliable transmission scheme employing implicit ACKnowledgements. At the endpoint, the fabric packet is regenerated, and the Ethernet packet data is de-encapsulated. The Ethernet frames received from and transmitted to an Ethernet network are encoded using 64b/66b encoding, having an overhead-to-data bit ratio of 1:32. Meanwhile, the link packets have the same ratio, including one overhead bit per flit and a 14-bit CRC plus a 2-bit credit return field or sideband used for credit-based flow control.
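As a quick sanity check of the stated ratios, the sketch below (in C, not taken from the disclosure) tallies the overhead and data bits for 64b/66b encoding and for a link packet built from 65-bit flits plus the 14-bit CRC and 2-bit credit return field. The group size of 16 flits is not given in this abstract; it is the value implied by the stated 1:32 ratio together with the per-flit, CRC, and credit-return overheads.

```c
/*
 * Sketch: verify the 1:32 overhead-to-data bit ratio stated for both
 * 64b/66b-encoded Ethernet and the link-packet format described above.
 * The flit-group size (16) is an assumption derived from the stated ratio.
 */
#include <assert.h>
#include <stdio.h>

int main(void)
{
    /* 64b/66b encoding: 2 sync-header bits per 64 data bits. */
    int enc_data = 64, enc_overhead = 2;
    assert(enc_data == 32 * enc_overhead);            /* 1:32 */

    /* Link packet: n flits of 64 data bits plus 1 type bit each,
     * followed by a 14-bit CRC and a 2-bit credit-return sideband. */
    int flits       = 16;                             /* assumed group size */
    int lp_data     = flits * 64;                     /* 1024 data bits     */
    int lp_overhead = flits * 1 + 14 + 2;             /*   32 overhead bits */
    assert(lp_data == 32 * lp_overhead);              /* same 1:32 ratio    */

    printf("64b/66b  : %d data bits / %d overhead bits\n", enc_data, enc_overhead);
    printf("link pkt : %d data bits / %d overhead bits\n", lp_data, lp_overhead);
    return 0;
}
```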
Abstract:
In an embodiment, at least one interface mechanism may be provided. The mechanism may permit, at least in part, at least one process to allocate, at least in part, and/or configure, at least in part, at least one network-associated object. Such allocation and/or configuration, at least in part, may be in accordance with at least one parameter set that may correspond, at least in part, to at least one query issued by the at least one process via the mechanism. Many modifications are possible without departing from this embodiment.
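A minimal sketch of the described flow, with entirely hypothetical names and parameters (none appear in the abstract): a process issues a query through the interface mechanism, receives a corresponding parameter set, and then allocates and configures a network-associated object according to that set.

```c
/* Hypothetical sketch only: the types, fields, and function names are
 * illustrative and do not come from the abstract. */
#include <stdio.h>

struct param_set  { int mtu; int num_queues; };    /* assumed parameters */
struct net_object { int handle; struct param_set cfg; };

/* Stub: return the parameter set corresponding to the process's query. */
static struct param_set iface_query(const char *query)
{
    (void)query;
    struct param_set ps = { .mtu = 1500, .num_queues = 4 };
    return ps;
}

/* Stub: allocate and configure a network-associated object per the set. */
static struct net_object iface_allocate(const struct param_set *ps)
{
    struct net_object obj = { .handle = 1, .cfg = *ps };
    return obj;
}

int main(void)
{
    struct param_set  ps = iface_query("default-endpoint");
    struct net_object no = iface_allocate(&ps);
    printf("object %d: mtu=%d queues=%d\n", no.handle, no.cfg.mtu, no.cfg.num_queues);
    return 0;
}
```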
Abstract:
Disclosed herein are high performance systems with low latency error correction as well as related devices and methods. In some embodiments, high performance systems may include: central processing units, adapter chips, and switch chips connected via channels, each chip including link level forward error correction and link level replay, where errors at or below a threshold level are corrected by forward error correction and remaining errors are corrected using replay. In some embodiments, high performance systems may include: central processing units, adapter chips, and switch chips connected via channels, each chip including link level forward error correction, link level replay, and a multiplexer for determining which error correction technique to use based on the number of errors and an error threshold level.
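The selection between the two correction techniques can be illustrated with a short sketch; the function and type names below are assumptions, not taken from the disclosure. Errors at or below the threshold are handled by forward error correction, while anything beyond it triggers a link-level replay.

```c
/* Illustrative sketch of the described selection logic; all names are
 * assumptions made for the example. */
#include <stdio.h>

typedef enum { CORRECT_WITH_FEC, REQUEST_REPLAY } link_action;

/* Errors at or below the threshold are corrected in place by FEC;
 * anything above it falls back to a link-level replay of the data. */
static link_action select_correction(unsigned detected_errors,
                                     unsigned error_threshold)
{
    return detected_errors <= error_threshold ? CORRECT_WITH_FEC
                                              : REQUEST_REPLAY;
}

int main(void)
{
    unsigned threshold = 2;                      /* assumed threshold value */
    for (unsigned errs = 0; errs <= 4; errs++)
        printf("%u error(s): %s\n", errs,
               select_correction(errs, threshold) == CORRECT_WITH_FEC
                   ? "correct with FEC" : "request replay");
    return 0;
}
```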
Abstract:
Technologies for fabric security include one or more managed network devices coupled to one or more computing nodes via high-speed fabric links. A managed network device enables a port and, while enabling the port, securely determines the node type of the link partner coupled to the port. If the link partner is a computing node, management access is not allowed at the port. The managed network device may allow management access at certain predefined ports, which may be connected to one or more management nodes. Management access may be allowed for additional ports in response to management messages received from the management nodes. The managed network device may check and verify data packet headers received from a computing node at each port. The managed network device may rate-limit management messages received from a computing node at each port. Other embodiments are described and claimed.
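A rough sketch of the per-port policy this abstract describes, assuming hypothetical node-type values, structure fields, and function names: management access defaults to denied when the link partner is a computing node, is enabled on predefined management ports, and management traffic arriving from computing-node ports is rate-limited.

```c
/* Illustrative sketch of the per-port policy described above; the node-type
 * values, structure fields, and function names are assumptions. */
#include <stdbool.h>
#include <stdio.h>

enum node_type { NODE_COMPUTE, NODE_MANAGEMENT };

struct port_state {
    enum node_type link_partner;      /* determined securely while enabling    */
    bool           predefined_mgmt;   /* port pre-designated for management    */
    bool           mgmt_allowed;      /* management access currently permitted */
    unsigned       mgmt_msgs_seen;    /* count used for rate limiting          */
};

/* Enable a port: record the securely determined link-partner type and set the
 * default management-access policy. Computing-node ports get no management
 * access; predefined management ports do. Other ports may be granted access
 * later by a message from a management node (not shown here). */
static void enable_port(struct port_state *p, enum node_type partner)
{
    p->link_partner   = partner;
    p->mgmt_allowed   = (partner == NODE_MANAGEMENT) && p->predefined_mgmt;
    p->mgmt_msgs_seen = 0;
}

/* Accept or drop a management message arriving on the port, rate-limiting
 * management traffic that originates from a computing-node link partner. */
static bool accept_mgmt_message(struct port_state *p, unsigned rate_limit)
{
    if (p->link_partner == NODE_COMPUTE)
        return p->mgmt_allowed && ++p->mgmt_msgs_seen <= rate_limit;
    return p->mgmt_allowed;
}

int main(void)
{
    struct port_state compute_port = { .predefined_mgmt = false };
    enable_port(&compute_port, NODE_COMPUTE);
    printf("mgmt msg from computing node accepted? %s\n",
           accept_mgmt_message(&compute_port, 10) ? "yes" : "no");
    return 0;
}
```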
Abstract:
Technologies for scalable local addressing include one or more managed network devices coupled to one or more computing nodes via high-speed fabric links. A computing node may transmit a data packet including a destination local identifier (DLID) that identifies the destination computing node. The DLID may be 32, 24, 20, or 16 bits wide. The managed network device may determine whether the DLID is within a configurable multicast address space and, if so, forward the data packet to a multicast group. The managed network device may also determine whether the DLID is within a configurable collective address space and, if so, perform a collective acceleration operation. The number of top-most bits set in a multicast mask and the number of additional top-most bits set in a collective mask may be configured. Multicast LIDs may be converted between different bit lengths. Other embodiments are described and claimed.
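One plausible reading of the mask scheme, sketched below with assumed names and semantics: the multicast address space is the set of DLIDs whose top-most multicast-mask bits are all set, the collective space is the narrower sub-range whose additional top-most bits are also set, and multicast LIDs are converted between widths by preserving the group bits and re-applying the set top-most bits at the new width.

```c
/* Illustrative sketch of the DLID classification described above; the exact
 * mask semantics and the width-conversion rule are assumptions. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Mask with the top-most `n` bits of a `width`-bit LID set (width <= 32). */
static uint32_t top_bits_mask(unsigned width, unsigned n)
{
    if (n == 0)
        return 0;
    return (uint32_t)((((uint64_t)1 << n) - 1) << (width - n));
}

static bool is_multicast(uint32_t dlid, unsigned width, unsigned mcast_bits)
{
    uint32_t m = top_bits_mask(width, mcast_bits);
    return (dlid & m) == m;           /* all top-most multicast-mask bits set */
}

static bool is_collective(uint32_t dlid, unsigned width,
                          unsigned mcast_bits, unsigned extra_coll_bits)
{
    uint32_t m = top_bits_mask(width, mcast_bits + extra_coll_bits);
    return (dlid & m) == m;           /* collective space sits inside multicast */
}

/* Convert a multicast LID between widths by keeping the group bits below the
 * multicast mask and re-applying the top-most set bits at the new width. */
static uint32_t convert_mcast_lid(uint32_t dlid, unsigned from_width,
                                  unsigned to_width, unsigned mcast_bits)
{
    uint32_t group = dlid & ~top_bits_mask(from_width, mcast_bits);
    return group | top_bits_mask(to_width, mcast_bits);
}

int main(void)
{
    unsigned width = 16, mcast_bits = 4, extra_coll_bits = 2;
    uint32_t dlid = 0xF123;           /* top 4 bits set: multicast, not collective */
    printf("multicast?  %d\n", is_multicast(dlid, width, mcast_bits));
    printf("collective? %d\n", is_collective(dlid, width, mcast_bits, extra_coll_bits));
    printf("as 32-bit multicast LID: 0x%08X\n",
           convert_mcast_lid(dlid, width, 32, mcast_bits));
    return 0;
}
```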