Abstract:
Apparatus, method and system for supporting Remote Direct Memory Access (RDMA) Read V2 Request and Response messages using the Internet Wide Area RDMA Protocol (iWARP). iWARP logic in an RDMA Network Interface Controller (RNIC) is configured to generate a new RDMA Read V2 Request message and generate a new RDMA Read V2 Response message in response to a received RDMA Read V2 Request message, and send the messages to an RDMA remote peer using iWARP implemented over an Ethernet network. The iWARP logic is further configured to process RDMA Read V2 Response messages received from the RDMA remote peer, and to write data contained in the messages to appropriate locations using DMA transfers from buffers on the RNIC into system memory. In addition, the new semantics remove the need for extra operations to grant and revoke remote access rights.
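As a concrete, purely illustrative reading of the above, the C sketch below shows what a Read V2 request might carry on the wire. The opcode values and field layout are assumptions, not a published encoding; the point is that, because the requesting RNIC places the response data into system memory itself, the request names only the remote source buffer and carries no sink STag for the peer to write through, which is what eliminates the grant and revoke operations.

```c
#include <stdint.h>

/* Hypothetical RDMAP opcode values; 0x0-0x7 are defined by RFC 5040,
 * the V2 values below are invented for this sketch. */
enum rdmap_opcode {
    RDMAP_RDMA_WRITE        = 0x0,
    RDMAP_RDMA_READ_REQ     = 0x1,
    RDMAP_RDMA_READ_RESP    = 0x2,
    RDMAP_SEND              = 0x3,
    RDMAP_RDMA_READ_REQ_V2  = 0x8,  /* assumed codepoint */
    RDMAP_RDMA_READ_RESP_V2 = 0x9,  /* assumed codepoint */
};

/* Assumed Read V2 request payload. Unlike the RFC 5040 read request,
 * it names only the remote source buffer: the requesting RNIC places
 * the returned data locally, so no sink STag is advertised and no
 * remote access rights need granting or revoking. */
struct rdma_read_v2_req {
    uint32_t src_stag;    /* steering tag of the remote source buffer */
    uint64_t src_offset;  /* tagged offset into that buffer */
    uint32_t read_len;    /* bytes requested */
} __attribute__((packed));
```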
Abstract:
Apparatus, methods and systems for supporting Send with Immediate Data messages using Remote Direct Memory Access (RDMA) and the Internet Wide Area RDMA Protocol (iWARP). iWARP logic in an RDMA Network Interface Controller (RNIC) is configured to generate different types of Send with Immediate Data messages, each including a header with a unique RDMA opcode identifying the type of Send with Immediate Data message, and send the message to an RDMA remote peer using iWARP implemented over an Ethernet network. The iWARP logic is further configured to process the Send with Immediate Data messages received from the RDMA remote peer. The Send with Immediate Data messages include a Send with Immediate Data message, a Send with Invalidate and Immediate Data message, a Send with Solicited Event (SE) and Immediate Data message, and a Send with Invalidate and SE and Immediate Data message.
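A minimal sketch of how the four variants could be distinguished purely by opcode, assuming invented opcode values and a simplified header; only the set of four message types comes from the abstract.

```c
#include <stdint.h>

/* Invented opcode values for the four variants named above. RFC 5040
 * already defines Send (0x3), Send with Invalidate (0x4), Send with
 * SE (0x5) and Send with SE and Invalidate (0x6), so the Immediate
 * Data variants are shown here as new, assumed codepoints. */
enum send_imm_opcode {
    RDMAP_SEND_IMM        = 0xA,  /* Send with Immediate Data */
    RDMAP_SEND_INV_IMM    = 0xB,  /* Send with Invalidate and Immediate Data */
    RDMAP_SEND_SE_IMM     = 0xC,  /* Send with SE and Immediate Data */
    RDMAP_SEND_INV_SE_IMM = 0xD,  /* Send with Invalidate, SE and Immediate Data */
};

/* Simplified header sketch: the immediate data rides in the header so
 * the receiver can surface it in a completion without touching the
 * payload buffers. */
struct send_imm_hdr {
    uint8_t  opcode;    /* one of enum send_imm_opcode */
    uint32_t imm_data;  /* immediate data delivered to the receiver */
    uint32_t inv_stag;  /* STag to invalidate (Invalidate variants only) */
} __attribute__((packed));
```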
Abstract:
An embodiment may include circuitry to facilitate, at least in part, a first network interface controller (NIC) in a client to be capable of accessing, via a second NIC in a server that is remote from the client and in a manner that is independent of an operating system environment in the server, at least one command interface of another controller of the server. The command interface may include at least one controller command queue. Such accessing may include writing at least one queue element to the at least one command queue to command the another controller to perform at least one operation associated with the another controller. The another controller may perform the at least one operation in response, at least in part, to the at least one queue element. Many alternatives, variations, and modifications are possible.
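For illustration, the sketch below uses the standard libibverbs RDMA Write verb to place a single queue element into a remote controller's command queue, which matches the access pattern described. The 64-byte element size and the remote address/rkey describing the queue slot are assumptions about the controller's layout; a doorbell write notifying the controller would follow the same pattern.

```c
#include <stdint.h>
#include <infiniband/verbs.h>

/* Push one 64-byte command-queue element into the remote controller's
 * command queue with an RDMA Write. sqe_buf must lie inside the
 * registered memory region mr; remote_addr/rkey identify the queue
 * slot exposed by the server's NIC. Returns 0 on success. */
static int post_remote_queue_element(struct ibv_qp *qp, struct ibv_mr *mr,
                                     void *sqe_buf, uint64_t remote_addr,
                                     uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)sqe_buf,
        .length = 64,                 /* assumed queue-element size */
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .opcode     = IBV_WR_RDMA_WRITE,
        .sg_list    = &sge,
        .num_sge    = 1,
        .send_flags = IBV_SEND_SIGNALED,
        .wr.rdma    = { .remote_addr = remote_addr, .rkey = rkey },
    };
    struct ibv_send_wr *bad;

    return ibv_post_send(qp, &wr, &bad);
}
```

Because the write lands directly in the controller's command queue via the server's NIC, no code runs on the server's CPU, which is how the access stays independent of the server's operating system environment.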
Abstract:
Technologies for latency based service level agreement (SLA) management in remote direct memory access (RDMA) networks include multiple compute devices in communication via a network switch. A compute device determines a service level objective (SLO) indicative of a guaranteed maximum latency for a percentage of RDMA requests of an RDMA session. The compute device receives latency data indicative of latency of an RDMA request from a host device. The compute device determines a priority associated with the RDMA request as a function of the SLO and the latency data. The compute device schedules the RDMA request based on the priority. The network switch may allocate queue resources to the RDMA request based on the priority, reclaim the queue resources after the RDMA request is scheduled, and then return the queue resources to a free pool. Other embodiments are described and claimed.
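One way such a priority function might look, as a sketch only: the field names and thresholds below are assumptions, not the claimed scheduler. The idea is that the closer a session's observed tail latency sits to its SLO bound, the higher the priority given to its next request.

```c
#include <stdint.h>

/* Hypothetical SLO record: "at least pct percent of requests complete
 * within max_latency_us". */
struct rdma_slo {
    uint32_t max_latency_us;  /* guaranteed maximum latency */
    uint32_t pct;             /* percentage of requests covered, e.g. 99 */
};

/* Map the session's observed percentile latency onto a small priority
 * scale (0 = best effort, 3 = most urgent). Thresholds are invented. */
static int rdma_request_priority(const struct rdma_slo *slo,
                                 uint32_t observed_pct_latency_us)
{
    uint64_t obs   = observed_pct_latency_us;
    uint64_t bound = slo->max_latency_us;

    if (obs >= bound)
        return 3;               /* SLO violated: schedule first */
    if (obs * 4 >= bound * 3)
        return 2;               /* within 75% of the bound */
    if (obs * 2 >= bound)
        return 1;               /* within 50% of the bound */
    return 0;                   /* comfortable margin */
}
```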
Abstract:
Generally, this disclosure relates to a method of flow control. The method may include determining a server load in response to a request from a client; selecting a type of credit based at least in part on the server load; and sending a credit to the client based at least in part on the server load. The server load corresponds to a utilization level of the server, the credit corresponds to an amount of data that may be transferred between the server and the client, and the credit is configured to decrease over time if it is unused by the client.
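A sketch of the decaying-credit idea under stated assumptions: the linear decay rate and the particular load thresholds are invented for illustration, as is the choice to express load as a 0-100% utilization figure.

```c
#include <stdint.h>

/* Illustrative credit record: the credit grants the client an amount
 * of data it may transfer, and its value shrinks the longer it sits
 * unused. */
struct flow_credit {
    uint32_t bytes;          /* data the client may transfer */
    uint64_t granted_at_ms;  /* timestamp when the server issued it */
    uint32_t decay_per_ms;   /* bytes forfeited per ms of non-use */
};

/* Remaining value of an unused credit at time now_ms. */
static uint32_t credit_value(const struct flow_credit *c, uint64_t now_ms)
{
    uint64_t aged = (now_ms - c->granted_at_ms) * c->decay_per_ms;
    return aged >= c->bytes ? 0 : (uint32_t)(c->bytes - aged);
}

/* Pick a credit size from current server utilization (0-100%):
 * a lightly loaded server hands out larger credits. */
static uint32_t select_credit_bytes(unsigned int load_pct)
{
    if (load_pct < 30) return 1u << 20;    /* 1 MiB  */
    if (load_pct < 70) return 256u << 10;  /* 256 KiB */
    return 64u << 10;                      /* 64 KiB  */
}
```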
Abstract:
Examples are disclosed for replicating data between storage servers. In some examples, a network input/output (I/O) device coupled either to a client device or to a storage server may exchange remote direct memory access (RDMA) commands or RDMA completion commands associated with replicating data received from the client device. The data may be replicated to a plurality of storage servers interconnected to each other and/or to the client device via respective network communication links. Other examples are described and claimed.
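As an illustrative sketch, the fan-out below posts the same RDMA Write on one libibverbs queue pair per storage server. How each replica exposes its receive buffer (the per-server remote address and rkey) is an assumption about the deployment, not something the abstract specifies.

```c
#include <stdint.h>
#include <infiniband/verbs.h>

/* One entry per replica: its queue pair plus the remote buffer it
 * advertised for incoming replicated data. */
struct replica {
    struct ibv_qp *qp;
    uint64_t       remote_addr;
    uint32_t       rkey;
};

/* Post the same registered buffer to every replica. The caller then
 * waits for n signaled completions before acknowledging the client. */
static int replicate(struct replica *replicas, int n,
                     struct ibv_mr *mr, void *buf, uint32_t len)
{
    for (int i = 0; i < n; i++) {
        struct ibv_sge sge = {
            .addr = (uintptr_t)buf, .length = len, .lkey = mr->lkey,
        };
        struct ibv_send_wr wr = {
            .opcode     = IBV_WR_RDMA_WRITE,
            .sg_list    = &sge,
            .num_sge    = 1,
            .send_flags = IBV_SEND_SIGNALED,
            .wr.rdma    = { .remote_addr = replicas[i].remote_addr,
                            .rkey        = replicas[i].rkey },
        };
        struct ibv_send_wr *bad;
        int rc = ibv_post_send(replicas[i].qp, &wr, &bad);
        if (rc)
            return rc;
    }
    return 0;
}
```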
Abstract:
An apparatus comprising a first computing platform including a processor to execute a first trusted execution environment (TEE) to host a first plurality of virtual machines, and a first network interface controller to establish a trusted communication channel with a second computing platform via an orchestration controller.
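A heavily hedged sketch of the channel setup this implies, with every helper hypothetical and stubbed so the flow compiles: the local TEE attests itself, the orchestration controller brokers verification with the peer platform, and the NIC is handed the resulting key material to protect the channel.

```c
#include <stdint.h>
#include <string.h>

/* Entirely hypothetical types and helpers standing in for the TEE,
 * the orchestration controller, and the NIC; each stub just succeeds
 * so the sketch runs. */
struct tee_quote   { uint8_t data[64]; };  /* attestation evidence */
struct channel_key { uint8_t key[32]; };   /* agreed channel secret */

static int tee_generate_quote(struct tee_quote *q)
{ memset(q, 0, sizeof(*q)); return 0; }         /* TEE attests itself */

static int orchestrator_exchange(const struct tee_quote *q,
                                 struct channel_key *k)
{ (void)q; memset(k, 0, sizeof(*k)); return 0; } /* broker verifies peers */

static int nic_program_key(const struct channel_key *k)
{ (void)k; return 0; }                           /* NIC protects the link */

/* The flow the abstract implies: attest, broker via the orchestration
 * controller, then program the NIC that carries the trusted channel. */
int establish_trusted_channel(void)
{
    struct tee_quote q;
    struct channel_key k;

    if (tee_generate_quote(&q) || orchestrator_exchange(&q, &k))
        return -1;
    return nic_program_key(&k);
}
```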