Abstract:
A method includes receiving, in a network switch of a communication network, communication traffic that originates from a source node and arrives over a route through the communication network traversing one or more preceding network switches, for forwarding to a destination node. In response to detecting in the network switch a compromised ability to forward the communication traffic to the destination node, a notification is sent to the preceding network switches. The notification is to be consumed by the preceding network switches and requests the preceding network switches to modify the route so as not to traverse the network switch.
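As a minimal sketch of this scheme, the following Python simulation models switches that, on detecting a compromised forwarding ability, notify their preceding switches, which then drop the affected switch from the route. All class and field names (Switch, Notification, next_hops) are illustrative assumptions; the abstract does not specify a message format or routing protocol.

    # Hypothetical simulation of the fault-notification scheme described above.
    class Notification:
        """Asks preceding switches to route around the sender."""
        def __init__(self, failed_switch_id):
            self.failed_switch_id = failed_switch_id

    class Switch:
        def __init__(self, switch_id):
            self.switch_id = switch_id
            self.next_hops = {}   # destination -> candidate next-hop switches
            self.preceding = []   # switches that forward traffic through this one

        def forward(self, destination, healthy=True):
            if not healthy:
                # Compromised ability to forward: notify every preceding switch.
                note = Notification(self.switch_id)
                for prev in self.preceding:
                    prev.consume(note, destination)
                return None
            return self.next_hops[destination][0]

        def consume(self, note, destination):
            # Modify the route so it no longer traverses the failed switch.
            candidates = self.next_hops[destination]
            alternatives = [h for h in candidates if h.switch_id != note.failed_switch_id]
            self.next_hops[destination] = alternatives or candidates

    # Tiny topology: traffic goes a -> b -> dst, with a spare path a -> c -> dst.
    a, b, c = Switch("a"), Switch("b"), Switch("c")
    a.next_hops["dst"] = [b, c]
    b.preceding.append(a)
    b.forward("dst", healthy=False)          # b reports that it cannot forward
    print(a.next_hops["dst"][0].switch_id)   # "c": route no longer traverses b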
Abstract:
In one embodiment, a network switch device includes a network interface to receive vectors from endpoint devices, and an aggregation and reduction accelerator to perform elementwise and vector-splitting operations with the vectors as input, yielding at least two intermediate vector results, wherein the network interface is to send the at least two intermediate vector results to different corresponding network switches in different switch aggregation trees, receive at least two final vector results of an aggregation and reduction process from the different switch aggregation trees, and combine the at least two final vector results to yield a combined final vector result, wherein the network interface is to send the combined final vector result to the endpoint devices.
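The split/aggregate/combine flow can be sketched as follows in Python with NumPy; the per-tree transport is stubbed out, and the function names, the use of summation as the elementwise operation, and the two-tree split are assumptions for illustration.

    import numpy as np

    def elementwise_reduce(vectors):
        # Elementwise sum of the vectors received from the endpoint devices.
        return np.sum(vectors, axis=0)

    def split(vector, num_trees=2):
        # Split the reduced vector into per-tree slices (intermediate results).
        return np.array_split(vector, num_trees)

    def tree_aggregate(intermediate, peer_contributions):
        # Stand-in for the aggregation performed inside one switch tree.
        return intermediate + sum(peer_contributions)

    # Vectors received from endpoint devices attached to this switch.
    local_vectors = [np.arange(8.0), np.ones(8)]
    reduced = elementwise_reduce(local_vectors)
    parts = split(reduced)                   # two intermediate vector results

    # Each part would be sent to a different switch aggregation tree; here the
    # contribution of other switches in each tree is modeled as zeros.
    finals = [tree_aggregate(p, [np.zeros_like(p)]) for p in parts]

    # Combine the final results returned by the trees and send to the endpoints.
    combined = np.concatenate(finals)
    print(combined)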
Abstract:
Systems and methods are described herein for processing data packets. An example network adapter may include a network interface operatively coupled to a communication network and packet processing circuitry operatively coupled to the network interface. The packet processing circuitry is configured to receive, via the network interface, a plurality of data packets associated with a message; determine, for each data packet, at least one corresponding reserved stride in a strided buffer; store each data packet in the at least one corresponding reserved stride; process the strided buffer upon storing the plurality of data packets in a corresponding plurality of reserved strides; and generate a completion notification indicating that the plurality of data packets in the strided buffer has been processed.
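A simplified Python model of the strided-buffer receive path is shown below; the stride size, the one-packet-per-stride mapping, and the shape of the completion notification are assumptions, not the adapter's actual interface.

    STRIDE_SIZE = 64   # assumed stride size in bytes

    class StridedBuffer:
        def __init__(self, num_strides):
            self.data = bytearray(num_strides * STRIDE_SIZE)
            self.filled = set()

        def reserve(self, packet_index):
            # Determine the reserved stride for this packet of the message
            # (one stride per packet in this simplified model).
            return packet_index

        def store(self, stride, payload):
            start = stride * STRIDE_SIZE
            self.data[start:start + len(payload)] = payload
            self.filled.add(stride)

    def receive_message(packets):
        buf = StridedBuffer(num_strides=len(packets))
        for i, payload in enumerate(packets):
            buf.store(buf.reserve(i), payload)
        if len(buf.filled) == len(packets):
            # All packets of the message are in place: process the buffer,
            # then generate a completion notification.
            return {"completion": True, "bytes_processed": len(bytes(buf.data))}
        return {"completion": False}

    print(receive_message([b"header", b"payload-0", b"payload-1"]))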
Abstract:
A processing device includes an interface and one or more processing circuits. The interface is to connect to a host processor. The one or more processing circuits are to receive from the host processor, via the interface, a notification specifying an operation for execution by the processing device, the operation including (i) multiple tasks that are executable by the processing device and (ii) execution dependencies among the tasks; in response to the notification, to determine a schedule for executing the tasks, the schedule complying with the execution dependencies; and to execute the operation by executing the tasks of the operation in accordance with the schedule.
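One way to realize a schedule that complies with the execution dependencies is a topological ordering of the tasks, sketched below in Python; the notification layout and the task bodies are hypothetical.

    from graphlib import TopologicalSorter   # Python 3.9+

    def schedule_and_execute(notification):
        tasks = notification["tasks"]          # task name -> callable
        deps = notification["dependencies"]    # task name -> prerequisite names
        # A static order that complies with the execution dependencies.
        schedule = TopologicalSorter(deps).static_order()
        results = {}
        for name in schedule:
            results[name] = tasks[name]()      # execute tasks in scheduled order
        return results

    notification = {
        "tasks": {
            "read":    lambda: "data read",
            "compute": lambda: "result computed",
            "write":   lambda: "result written",
        },
        "dependencies": {"compute": {"read"}, "write": {"compute"}},
    }
    print(schedule_and_execute(notification))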
Abstract:
A network device includes a first interface, a second interface, and circuitry. The first interface is configured to communicate at least with a memory. The second interface is configured to communicate over a network with a peer network device. The circuitry is configured to receive a request to transfer data over the network between the memory and the peer network device in accordance with (i) a pattern of offsets to be accessed in the memory and (ii) a memory key representing a memory space to be accessed using the pattern, and to transfer the data in accordance with the request.
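The pattern-plus-memory-key addressing can be illustrated with a small Python model in which the key selects a registered region and the pattern lists the offsets to gather; the request fields and the region table are assumptions for the sketch.

    # Memory key -> registered memory space (toy registration table).
    memory_regions = {0x1234: bytearray(b"ABCDEFGHIJKLMNOP")}

    def transfer(request):
        region = memory_regions[request["memory_key"]]
        length = request["element_length"]
        # Gather the bytes at each offset of the pattern, in order.
        return b"".join(bytes(region[o:o + length]) for o in request["pattern"])

    request = {
        "memory_key": 0x1234,
        "pattern": [0, 4, 8],     # offsets to be accessed in the memory space
        "element_length": 2,      # bytes transferred per pattern element
    }
    print(transfer(request))      # b'ABEFIJ', to be sent to the peer device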
Abstract:
A network device includes a network interface, a host interface and processing circuitry. The network interface is configured to connect to a communication network. The host interface is configured to connect to a host including a processor. The processing circuitry is configured to receive from the processor, via the host interface, a notification specifying an operation for execution by the network device, the operation including (i) multiple tasks that are executable by the network device, and (ii) execution dependencies among the tasks. In response to the notification, the processing circuitry is configured to determine a schedule for executing the tasks, the schedule complying with the execution dependencies, and to execute the operation by executing the tasks of the operation in accordance with the schedule.
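An alternative, event-driven sketch of the same scheduling idea is given below: a task is issued as soon as every task it depends on has completed. The task names and data structures are illustrative only.

    from collections import deque

    def run_operation(tasks, dependencies):
        remaining = {name: set(dependencies.get(name, ())) for name in tasks}
        ready = deque(name for name, deps in remaining.items() if not deps)
        done = []
        while ready:
            name = ready.popleft()
            tasks[name]()                          # execute the ready task
            done.append(name)
            for other, deps in remaining.items():
                if name in deps:
                    deps.discard(name)
                    # Issue a dependent task once all its prerequisites finished.
                    if not deps and other not in done and other not in ready:
                        ready.append(other)
        return done

    tasks = {"dma_read": lambda: None, "reduce": lambda: None, "dma_write": lambda: None}
    dependencies = {"reduce": {"dma_read"}, "dma_write": {"reduce"}}
    print(run_operation(tasks, dependencies))      # ['dma_read', 'reduce', 'dma_write']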
Abstract:
A network adapter includes a network interface, a host interface and processing circuitry. The network interface connects to a communication network for communicating with remote targets. The host interface connects to a host that accesses a Multi-Channel Send Queue (MCSQ) storing Work Requests (WRs) originating from client processes running on the host. The processing circuitry is configured to retrieve WRs from the MCSQ and distribute the WRs among multiple Send Queues (SQs) accessible by the processing circuitry.
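A minimal Python sketch of the retrieval-and-distribution step follows; the round-robin policy and the work-request fields are assumptions, since the abstract does not fix a distribution rule.

    from collections import deque
    from itertools import cycle

    def distribute(mcsq, num_sqs):
        sqs = [deque() for _ in range(num_sqs)]
        next_sq = cycle(range(num_sqs))
        while mcsq:
            wr = mcsq.popleft()            # retrieve the next WR from the MCSQ
            sqs[next(next_sq)].append(wr)  # place it on one of the SQs
        return sqs

    # WRs originating from client processes p0..p3 on the host.
    mcsq = deque({"client": c, "op": "send"} for c in ("p0", "p1", "p2", "p3"))
    for i, sq in enumerate(distribute(mcsq, num_sqs=2)):
        print(f"SQ{i}:", [wr["client"] for wr in sq])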
Abstract:
In a fabric of network elements, one network element stores in its memory an object pool to be accessed. A request for atomic access to the object pool by another network element is carried out by transmitting the request through the fabric to the one network element, performing a remote direct memory access to a designated member of the object pool, atomically executing the request, and returning a result of the execution of the request through the fabric to the other network element.
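The remote atomic access can be modeled in Python as below, with a lock standing in for the owning element's atomic execution and a function call standing in for the request crossing the fabric; all names are illustrative.

    import threading

    class ObjectPoolHost:
        """The network element that stores the object pool in its memory."""
        def __init__(self, pool):
            self.pool = list(pool)
            self.lock = threading.Lock()

        def atomic_fetch_and_add(self, index, value):
            # Executed atomically on the element that owns the pool.
            with self.lock:
                old = self.pool[index]
                self.pool[index] = old + value
                return old

    def remote_request(host, index, value):
        # Models the request and its result crossing the fabric.
        return host.atomic_fetch_and_add(index, value)

    host = ObjectPoolHost(pool=[10, 20, 30])
    print(remote_request(host, index=1, value=5))   # previous value: 20
    print(host.pool)                                # [10, 25, 30]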
Abstract:
A Network Interface (NI) includes a host interface, which is configured to receive from a host processor of a node one or more cross-channel work requests that are derived from an operation to be executed by the node. The NI includes a plurality of work queues for carrying out transport channels to one or more peer nodes over a network. The NI further includes control circuitry, which is configured to accept the cross-channel work requests via the host interface, and to execute the cross-channel work requests using the work queues by controlling an advance of at least a given work queue according to an advancing condition, which depends on a completion status of one or more other work queues, so as to carry out the operation.
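The advancing condition can be sketched as a wait-style work request whose progress depends on another queue's completion count, as in the following Python model; the queue and request representations are simplifications.

    class WorkQueue:
        def __init__(self, name):
            self.name = name
            self.requests = []    # (kind, detail) pairs posted by the host
            self.completed = 0
            self.head = 0

        def post(self, kind, detail=None):
            self.requests.append((kind, detail))

        def try_advance(self, queues):
            while self.head < len(self.requests):
                kind, detail = self.requests[self.head]
                if kind == "wait":
                    other, needed = detail
                    # Advancing condition: depends on another queue's completions.
                    if queues[other].completed < needed:
                        return
                else:
                    print(f"{self.name}: executing {kind}")
                    self.completed += 1
                self.head += 1

    queues = {"send": WorkQueue("send"), "recv": WorkQueue("recv")}
    queues["send"].post("wait", ("recv", 1))   # cross-channel work request
    queues["send"].post("send_data")
    queues["recv"].post("recv_data")

    queues["send"].try_advance(queues)   # blocked: "recv" has not completed yet
    queues["recv"].try_advance(queues)   # completes the receive
    queues["send"].try_advance(queues)   # advancing condition met, data is sent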
Abstract:
Devices, methods, and systems are provided. In one example, a device is described to include a device interface that receives data from at least one data source; a data shuffle unit that collects the data received from the at least one data source, receives a descriptor that describes a data shuffle operation to perform on the data received from the at least one data source, performs the data shuffle operation on the collected data to produce shuffled data, and provides the shuffled data to at least one data target.
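A descriptor-driven shuffle can be illustrated with the following Python sketch, in which the descriptor names a permutation to apply to the collected elements; the descriptor fields and the choice of permutation as the shuffle type are assumptions.

    def data_shuffle(collected, descriptor):
        if descriptor["op"] == "permute":
            # Reorder the collected elements per the descriptor's index pattern.
            return [collected[i] for i in descriptor["indices"]]
        raise ValueError(f"unsupported shuffle operation: {descriptor['op']}")

    collected = ["a0", "a1", "a2", "a3"]        # data from the data source(s)
    descriptor = {"op": "permute", "indices": [2, 0, 3, 1]}
    shuffled = data_shuffle(collected, descriptor)
    print(shuffled)                             # provided to the data target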