Abstract:
Technologies for tracing network performance include a network computing device configured to receive a network packet from a source endpoint node, process the received network packet, capture trace data corresponding to the network packet as it is processed by the network computing device, and transmit the received network packet to a target endpoint node. The network computing device is further configured to generate a trace data network packet that includes at least a portion of the captured trace data and transmit the trace data network packet to the target endpoint node. The target endpoint node is configured to monitor performance of the network by reconstructing a trace of the network packet based on the trace data of the trace data network packet. Other embodiments are described herein.
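To make the flow concrete, here is a minimal Python sketch of the idea: a network device forwards a packet to the target endpoint, records per-device timing as trace data, and ships that trace data in a separate packet so the endpoint can reconstruct the packet's path. All names (SwitchSim, Endpoint, the trace fields) are illustrative assumptions, not identifiers from the disclosure.

```python
import time

class SwitchSim:
    """Sketch of a network device that captures per-packet trace data
    (all names here are illustrative, not from the patent)."""
    def __init__(self, device_id):
        self.device_id = device_id

    def forward(self, packet, target):
        ingress = time.monotonic()
        # ... normal packet processing would happen here ...
        egress = time.monotonic()
        target.receive(packet)
        # Send the captured trace data in a separate packet to the
        # same target endpoint so it can reconstruct the packet's path.
        trace = {"device": self.device_id,
                 "packet_id": packet["id"],
                 "ingress": ingress,
                 "egress": egress}
        target.receive_trace(trace)

class Endpoint:
    def __init__(self):
        self.packets, self.traces = [], {}

    def receive(self, packet):
        self.packets.append(packet)

    def receive_trace(self, trace):
        # Reconstruct the trace by grouping trace records per packet id.
        self.traces.setdefault(trace["packet_id"], []).append(trace)

dst = Endpoint()
SwitchSim("sw0").forward({"id": 1, "payload": b"hello"}, dst)
print(dst.traces[1])  # the reconstructed trace for packet 1
```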
Abstract:
Technologies for filtering a received message include a receiving computing device to receive messages and a sender computing device to send messages. The receiving computing device is configured to retrieve a descriptor from a received message and retrieve another descriptor from an inspection entry of a network port entry, which the receiving computing device selects from a network port entry table based on the logical network port on which the message was received. The receiving computing device is further configured to compare the two descriptors to determine whether they match. Upon finding a match, the receiving computing device is still further configured to perform an operation corresponding to the inspection entry whose descriptor matches the descriptor of the message. Other embodiments are described and claimed.
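A minimal sketch of the matching logic follows, assuming a table keyed by logical port number in which each inspection entry pairs a descriptor with an operation; the port numbers, descriptor strings, and operations below are hypothetical.

```python
# Hypothetical inspection table keyed by logical network port; each
# inspection entry pairs a descriptor with an operation to run on a match.
port_table = {
    5001: [
        {"descriptor": "telemetry", "operation": lambda m: print("log:", m)},
        {"descriptor": "control",   "operation": lambda m: print("drop:", m)},
    ],
}

def filter_message(port, message):
    """Compare the message's descriptor against each inspection entry for
    the receiving port and run the matching entry's operation, if any."""
    for entry in port_table.get(port, []):
        if entry["descriptor"] == message["descriptor"]:
            entry["operation"](message)
            return True
    return False  # no match: fall back to default handling

filter_message(5001, {"descriptor": "telemetry", "payload": b"\x01\x02"})
```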
Abstract:
Technologies for dynamic work queue management include a producer computing device communicatively coupled to a consumer computing device. The consumer computing device is configured to transmit a pop request (e.g., a one-sided pull request) that includes consumption constraints indicating an amount of work (e.g., a range of acceptable fractions of work elements to return from a work queue of the producer computing device) to pull from the producer computing device. The producer computing device is configured to determine whether the pop request can be satisfied and generate a response that includes an indication of the result of the determination and one or more producer metrics usable by the consumer computing device to determine a subsequent action to perform upon receipt of the response. Other embodiments are described and claimed herein.
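The pop-request exchange might look like the following sketch, which assumes the consumption constraints are expressed as a minimum and maximum fraction of the producer's current queue; the exact constraint semantics and the metrics returned (here, only a queue depth) are illustrative guesses.

```python
from collections import deque

class Producer:
    """Sketch of a producer serving one-sided pop requests that carry
    consumption constraints (names and metrics are illustrative)."""
    def __init__(self, work):
        self.queue = deque(work)

    def handle_pop(self, min_frac, max_frac):
        n = len(self.queue)
        grant = int(n * max_frac)          # most we are willing to hand out
        need = max(1, int(n * min_frac))   # least the consumer will accept
        if grant < need:
            # Cannot satisfy the request; return metrics so the consumer
            # can choose its next action (e.g., back off or retry elsewhere).
            return {"ok": False, "elements": [], "queue_depth": n}
        elements = [self.queue.popleft() for _ in range(grant)]
        return {"ok": True, "elements": elements,
                "queue_depth": len(self.queue)}

p = Producer(range(10))
resp = p.handle_pop(min_frac=0.2, max_frac=0.5)
print(resp)  # consumer inspects ok + queue_depth to decide a follow-up
```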
Abstract:
Technologies for integrated thread scheduling include a computing device having a network interface controller (NIC). The NIC is configured to detect and suspend a thread that is blocked by one or more communication operations. A thread scheduling engine of the NIC is configured to move the suspended thread from a running queue of the system thread scheduler to a pending queue of the thread scheduling engine. The thread scheduling engine is further configured to move the suspended thread from the pending queue to a ready queue of the thread scheduling engine upon determining that any dependencies and/or blocking communication operations have completed. Other embodiments are described and claimed.
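A rough Python model of the pending-to-ready movement follows, assuming the engine tracks a set of outstanding operation identifiers per suspended thread; the method names and identifiers are invented for illustration.

```python
from collections import deque

class ThreadSchedEngine:
    """Sketch of a NIC-resident thread scheduling engine: blocked threads
    sit in a pending queue until their communication operations complete,
    then move to a ready queue for the system scheduler to resume."""
    def __init__(self):
        self.pending = {}    # thread_id -> set of outstanding op ids
        self.ready = deque()

    def suspend(self, thread_id, blocking_ops):
        # Thread was pulled from the system scheduler's running queue.
        self.pending[thread_id] = set(blocking_ops)

    def complete(self, thread_id, op_id):
        # Called when the NIC finishes a communication operation.
        ops = self.pending.get(thread_id)
        if ops is not None:
            ops.discard(op_id)
            if not ops:  # all dependencies done: thread becomes ready
                del self.pending[thread_id]
                self.ready.append(thread_id)

eng = ThreadSchedEngine()
eng.suspend("t1", blocking_ops=["recv#7", "send#9"])
eng.complete("t1", "recv#7")
eng.complete("t1", "send#9")
print(list(eng.ready))  # ['t1'] -- ready to be rescheduled
```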
Abstract:
Technologies for aggregation-based message processing include multiple computing nodes in communication over a network. A computing node receives a message from a remote computing node, increments an event counter in response to receiving the message, determines whether an event trigger is satisfied in response to incrementing the counter, and writes a completion event to an event queue if the event trigger is satisfied. An application of the computing node monitors the event queue for the completion event. The application may be executed by a processor core of the computing node, and the other operations may be performed by a host fabric interface of the computing node. The computing node may be a target node and count one-sided messages received from an initiator node, or the computing node may be an initiator node and count acknowledgement messages received from a target node. Other embodiments are described and claimed.
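In sketch form, the counter-and-trigger mechanism could be modeled as below; the threshold-style trigger and the event tuple format are assumptions for illustration.

```python
from collections import deque

class EventAggregator:
    """Sketch of HFI-side aggregation: count incoming messages and post a
    single completion event once a trigger is met, so the application
    polls one event-queue entry instead of N per-message events."""
    def __init__(self, threshold):
        self.counter = 0
        self.threshold = threshold
        self.event_queue = deque()

    def on_message(self, msg):
        self.counter += 1                   # increment on every arrival
        if self.counter >= self.threshold:  # event trigger satisfied?
            self.event_queue.append(("completion", self.counter))
            self.counter = 0

agg = EventAggregator(threshold=4)
for i in range(4):
    agg.on_message(f"msg{i}")
print(agg.event_queue.popleft())  # application observes one completion event
```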
Abstract:
In an example, there is disclosed a compute node, comprising: first one or more logic elements comprising a data producer engine to produce a datum; and a host fabric interface to communicatively couple the compute node to a fabric, the host fabric interface comprising second one or more logic elements comprising a data pulling engine, the data pulling engine to: publish the datum as available; receive a pull request for the datum, the pull request comprising a node identifier for a data consumer; and send the datum to the data consumer via the fabric. There is also disclosed a method of providing a data pulling engine.
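A minimal sketch of such a pulling engine follows, with the fabric stood in by a dictionary of per-node inboxes; the publish/pull method names and the key-based addressing are illustrative assumptions.

```python
class DataPullingEngine:
    """Sketch of a host-fabric-interface pulling engine: the producer
    publishes a datum as available, and the datum moves only when a
    consumer sends a pull request carrying its node identifier."""
    def __init__(self, fabric):
        self.fabric = fabric     # hypothetical fabric: node_id -> inbox
        self.published = {}

    def publish(self, key, datum):
        self.published[key] = datum  # advertise availability; no data moves

    def on_pull_request(self, key, consumer_node_id):
        datum = self.published.get(key)
        if datum is not None:
            # Send the datum over the fabric to the requesting consumer.
            self.fabric[consumer_node_id].append((key, datum))

fabric = {"node7": []}
engine = DataPullingEngine(fabric)
engine.publish("temps", [20.1, 20.4])
engine.on_pull_request("temps", "node7")
print(fabric["node7"])  # [('temps', [20.1, 20.4])]
```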
Abstract:
Technologies for communication with direct data placement include a number of computing nodes in communication over a network. Each computing node includes a many-core processor having an integrated host fabric interface (HFI) that maintains an association table (AT). In response to receiving a message from a remote device, the HFI determines whether the AT includes an entry associating one or more parameters of the message to a destination processor core. If so, the HFI causes a data transfer agent (DTA) of the destination core to receive the message data. The DTA may place the message data in a private cache of the destination core. Message parameters may include a destination process identifier or other network address and a virtual memory address range. The HFI may automatically update the AT based on communication operations generated by software executed by the processor cores. Other embodiments are described and claimed.
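As an illustration, the association-table lookup might resemble the following sketch, where entries map a (process id, virtual address range) pair to a core and the per-core private cache is stood in by a dictionary; all structure and names are assumptions, not the actual HFI design.

```python
class HostFabricInterface:
    """Sketch of an HFI association table (AT): message parameters such as
    (process id, virtual address range) map to a destination core, whose
    data transfer agent places the payload in that core's private cache."""
    def __init__(self):
        self.at = []           # list of (pid, addr_lo, addr_hi, core)
        self.core_caches = {}  # stand-in for per-core private caches

    def update_at(self, pid, addr_lo, addr_hi, core):
        # Typically driven automatically by observed communication ops.
        self.at.append((pid, addr_lo, addr_hi, core))

    def on_message(self, pid, addr, data):
        for (p, lo, hi, core) in self.at:
            if p == pid and lo <= addr < hi:
                # Matched: deliver via the destination core's DTA.
                self.core_caches.setdefault(core, {})[addr] = data
                return core
        return None  # no AT entry: fall back to normal memory placement

hfi = HostFabricInterface()
hfi.update_at(pid=42, addr_lo=0x1000, addr_hi=0x2000, core=3)
print(hfi.on_message(pid=42, addr=0x1800, data=b"payload"))  # -> 3
```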
Abstract:
Technologies for proxy-based multithreaded message passing include a number of computing nodes in communication over a network. Each computing node establishes a number of message passing interface (MPI) endpoints associated with threads executed within a host process. The threads generate MPI operations that are forwarded to a number of proxy processes. Each proxy process performs the MPI operation using an instance of a system MPI library. The threads may communicate with the proxy processes using a shared-memory communication method. Each thread may be assigned to a particular proxy process. Each proxy process may be assigned dedicated networking resources. MPI operations may include sending or receiving a message, collective operations, and one-sided operations. Other embodiments are described and claimed.
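The thread-to-proxy forwarding could be sketched as below, using Python threads and queues to stand in for host threads, proxy processes, and the shared-memory channel; a real proxy would issue MPI library calls (e.g., MPI_Send) where the stub prints.

```python
import queue
import threading

# Sketch of proxy-based message passing: application threads enqueue
# MPI-style operations to a dedicated proxy, which would issue them
# through its own instance of the system MPI library (stubbed out here).
proxy_inboxes = [queue.Queue(), queue.Queue()]  # one channel per proxy

def proxy_loop(inbox):
    while True:
        op = inbox.get()
        if op is None:
            break
        # A real proxy process would call MPI_Send / MPI_Recv here.
        print(f"proxy executing {op['kind']} -> rank {op['dest']}")

def app_thread(tid):
    # Each thread is statically assigned to one proxy (tid mod #proxies).
    inbox = proxy_inboxes[tid % len(proxy_inboxes)]
    inbox.put({"kind": "send", "dest": tid + 1, "payload": b"x"})

workers = [threading.Thread(target=proxy_loop, args=(q,))
           for q in proxy_inboxes]
for w in workers:
    w.start()
for t in range(4):
    app_thread(t)
for q in proxy_inboxes:
    q.put(None)  # sentinel: shut the proxies down
for w in workers:
    w.join()
```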