Abstract:
Techniques for sending Compute Express Link (CXL) packets over Ethernet (CXL-E) in a composable data center that may include disaggregated, composable servers. The techniques may include receiving, from a first server device, a request to bind the first server device with a multiple logical device (MLD) appliance. Based at least in part on the request, a first CXL-E connection may be established for the first server device to export a computing resource to the MLD appliance. The techniques may also include receiving, from the MLD appliance, an indication that the computing resource is available, and receiving, from a second server device, a second request for the computing resource. Based at least in part on the second request, a second CXL-E connection may be established for the second server device to consume or otherwise utilize the computing resource of the first server device via the MLD appliance.
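As a rough illustration of the orchestration flow described in the abstract above, the following Python sketch models the two CXL-E bindings: one connection through which a first server exports a computing resource to an MLD appliance, and a second connection through which another server consumes it. All names (Orchestrator, MldAppliance, Connection) are hypothetical and are not drawn from the patent.

```python
from dataclasses import dataclass, field


@dataclass
class Connection:
    server: str
    appliance: str
    role: str          # "export" or "consume"
    resource: str


@dataclass
class MldAppliance:
    name: str
    available: dict = field(default_factory=dict)   # resource name -> exporting server

    def publish(self, resource: str, exporter: str) -> None:
        # The MLD appliance reports the exported resource as available.
        self.available[resource] = exporter


class Orchestrator:
    def __init__(self) -> None:
        self.connections: list[Connection] = []

    def bind_export(self, server: str, appliance: MldAppliance, resource: str) -> Connection:
        # First CXL-E connection: the server exports a computing resource to the appliance.
        conn = Connection(server, appliance.name, "export", resource)
        self.connections.append(conn)
        appliance.publish(resource, server)
        return conn

    def bind_consume(self, server: str, appliance: MldAppliance, resource: str) -> Connection:
        # Second CXL-E connection: another server consumes the resource via the appliance.
        if resource not in appliance.available:
            raise LookupError(f"{resource} is not exported to {appliance.name}")
        conn = Connection(server, appliance.name, "consume", resource)
        self.connections.append(conn)
        return conn


if __name__ == "__main__":
    mld = MldAppliance("mld-0")
    orch = Orchestrator()
    orch.bind_export("server-a", mld, "memory-pool-1")    # first request: bind and export
    orch.bind_consume("server-b", mld, "memory-pool-1")   # second request: consume
    for conn in orch.connections:
        print(conn)
```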
Abstract:
An example method for facilitating policy-driven storage in a microserver computing environment is provided and includes receiving, at an input/output (I/O) adapter in a microserver chassis having a plurality of compute nodes and a shared storage resource, policy contexts prescribing storage access parameters of respective compute nodes, and enforcing the respective policy contexts on I/O operations by the compute nodes, such that a particular I/O operation by any compute node is not executed if the respective policy context does not allow it. The method further includes allocating tokens to command descriptors associated with I/O operations for accessing the shared storage resource, identifying a violation of any policy context of any compute node based on availability of the tokens, and throttling I/O operations by other compute nodes until the violation is resolved.
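A minimal sketch, assuming a simple per-node token budget, of the token-based enforcement loop the abstract describes: tokens are allocated to submitted commands, token exhaustion flags a policy violation, and the other nodes are throttled until the violation clears. The class and attribute names are illustrative only.

```python
class PolicyContext:
    def __init__(self, node_id: str, max_outstanding: int):
        self.node_id = node_id
        self.max_outstanding = max_outstanding   # storage access parameter from the policy
        self.tokens_in_use = 0

    def allows(self) -> bool:
        return self.tokens_in_use < self.max_outstanding


class IoAdapter:
    def __init__(self, contexts):
        self.contexts = {c.node_id: c for c in contexts}
        self.throttled: set[str] = set()

    def submit(self, node_id: str) -> bool:
        ctx = self.contexts[node_id]
        if not ctx.allows():
            # Token exhaustion marks a policy violation for this node; throttle
            # I/O operations from the other compute nodes until it clears.
            self.throttled = {n for n in self.contexts if n != node_id}
            return False                         # I/O operation is not executed
        if node_id in self.throttled:
            return False                         # node is currently being throttled
        ctx.tokens_in_use += 1                   # allocate a token to the command descriptor
        return True

    def complete(self, node_id: str) -> None:
        ctx = self.contexts[node_id]
        ctx.tokens_in_use = max(0, ctx.tokens_in_use - 1)
        if all(c.allows() for c in self.contexts.values()):
            self.throttled.clear()               # violation has disappeared; stop throttling


adapter = IoAdapter([PolicyContext("node-1", 2), PolicyContext("node-2", 4)])
print(adapter.submit("node-1"), adapter.submit("node-1"), adapter.submit("node-1"))
adapter.complete("node-1")
print(adapter.submit("node-2"))
```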
Abstract:
Presented herein are techniques for virtualizing functions of a Non-Volatile Memory Express (NVMe) controller that manages access to non-volatile memory such as a solid state drive. An example method includes receiving, at a Peripheral Component Interconnect Express (PCIe) interface card that is in communication with a PCIe bus, configuration information for virtual interfaces that support a non-volatile memory express interface protocol, wherein the virtual interfaces virtualize an NVMe controller; configuring the virtual interfaces in accordance with the configuration information; presenting the virtual interfaces to the PCIe bus; and receiving, by at least one of the virtual interfaces, from a host in communication with the at least one of the virtual interfaces via the PCIe bus, a message for a queue of the at least one of the virtual interfaces that is mapped to a queue of the NVMe controller.
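The sketch below illustrates, with hypothetical Python classes rather than a real NVMe or PCIe API, the queue mapping in the abstract above: each configured virtual interface owns a queue that is mapped to a queue of the underlying NVMe controller, so a message received from a host is relayed to the mapped controller queue.

```python
from collections import deque


class NvmeController:
    def __init__(self, num_queues: int):
        self.queues = [deque() for _ in range(num_queues)]


class VirtualInterface:
    def __init__(self, vif_id: int, controller: NvmeController, mapped_queue: int):
        self.vif_id = vif_id
        self.controller = controller
        self.mapped_queue = mapped_queue       # part of the per-interface configuration
        self.queue = deque()

    def receive(self, message: dict) -> None:
        # A host posts a message to this virtual interface's queue over the PCIe bus...
        self.queue.append(message)
        # ...and it is relayed to the NVMe controller queue the interface queue maps to.
        self.controller.queues[self.mapped_queue].append(message)


class PcieInterfaceCard:
    def __init__(self, controller: NvmeController):
        self.controller = controller
        self.vifs: list[VirtualInterface] = []

    def configure(self, config: list[dict]) -> None:
        # Configure the virtual interfaces in accordance with the configuration information.
        self.vifs = [VirtualInterface(c["vif_id"], self.controller, c["queue"]) for c in config]

    def present(self) -> list[int]:
        # Stand-in for presenting the configured virtual interfaces to the PCIe bus.
        return [v.vif_id for v in self.vifs]


card = PcieInterfaceCard(NvmeController(num_queues=4))
card.configure([{"vif_id": 0, "queue": 1}, {"vif_id": 1, "queue": 2}])
print(card.present())
card.vifs[0].receive({"opcode": "read", "lba": 0})
print(len(card.controller.queues[1]))
```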
Abstract:
An example method for adaptively coalescing remote direct memory access (RDMA) acknowledgements is provided. The method includes determining one or more input/output (I/O) characteristics of RDMA packets of a plurality of queue pairs (QPs) on a per-QP basis, each QP identifying a respective RDMA connection between a respective first compute node and a respective second compute node. The method further includes determining an acknowledgement frequency for providing acknowledgements of the RDMA packets on a per-QP basis (i.e., a respective acknowledgement frequency is set for each QP) based on the determined one or more I/O characteristics for each QP.
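As a hedged example of per-QP acknowledgement coalescing, the Python fragment below sets an acknowledgement frequency for each queue pair from two observed I/O characteristics (average packet size and packet rate). The specific thresholds and the policy itself are assumptions made for illustration; the abstract does not prescribe them.

```python
from dataclasses import dataclass


@dataclass
class QueuePair:
    qp_id: int
    avg_packet_size: int       # observed I/O characteristic (bytes)
    packets_per_sec: float     # observed I/O characteristic
    ack_every_n: int = 1       # acknowledgement frequency (1 = acknowledge every packet)


def update_ack_frequency(qp: QueuePair) -> None:
    # Set the acknowledgement frequency for this QP from its observed I/O characteristics.
    if qp.avg_packet_size < 4096:
        qp.ack_every_n = 1     # small, latency-sensitive I/O: acknowledge every packet
    elif qp.packets_per_sec > 100_000:
        qp.ack_every_n = 16    # high-rate bulk transfer: coalesce aggressively
    else:
        qp.ack_every_n = 4     # moderate coalescing


qps = [QueuePair(1, 512, 5_000.0), QueuePair(2, 65_536, 250_000.0), QueuePair(3, 8_192, 20_000.0)]
for qp in qps:                 # frequencies are determined on a per-QP basis
    update_ack_frequency(qp)
    print(qp.qp_id, qp.ack_every_n)
```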
Abstract:
A method is provided in one example embodiment and includes receiving, by a network element, a request from a network device connected to the network element to update a shared resource maintained by the network element; subsequent to receiving the request, identifying a Base Address Register Resource Table (“BRT”) element assigned to a Peripheral Component Interconnect (“PCI”) adapter of the network element associated with the network device, wherein the BRT element points to the shared resource; changing an attribute of the identified BRT element from read-only to read/write to enable the identified BRT element to be written by the network device; and notifying the network device that the attribute of the identified BRT element has been changed, thereby enabling the network device to update the shared resource via a Base Address Register (“BAR”) comprising the identified BRT element.
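A small Python model (hypothetical names, not a real PCI driver interface) of the sequence in the abstract above: identify the BRT element assigned to the requesting device's adapter, flip its attribute from read-only to read/write, notify the device, and then allow the shared resource to be updated through the BAR that comprises that BRT element.

```python
class BrtElement:
    def __init__(self, adapter_id: int, shared_resource: dict):
        self.adapter_id = adapter_id
        self.shared_resource = shared_resource   # the BRT element points to this resource
        self.writable = False                    # initial attribute: read-only


class NetworkElement:
    def __init__(self):
        self.shared_resource = {"counter": 0}
        # One BRT element assigned to the PCI adapter associated with each network device.
        self.brt_table = {0: BrtElement(0, self.shared_resource)}

    def handle_update_request(self, adapter_id: int) -> str:
        brt = self.brt_table[adapter_id]         # identify the assigned BRT element
        brt.writable = True                      # change the attribute: read-only -> read/write
        return f"BRT element for adapter {adapter_id} is now writable"   # notify the device

    def bar_write(self, adapter_id: int, key: str, value) -> None:
        brt = self.brt_table[adapter_id]
        if not brt.writable:
            raise PermissionError("BRT element is read-only")
        brt.shared_resource[key] = value         # device updates the shared resource via the BAR


element = NetworkElement()
print(element.handle_update_request(0))
element.bar_write(0, "counter", 42)
print(element.shared_resource)
```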
Abstract:
An example method for facilitating remote memory access with memory mapped addressing among multiple compute nodes is executed at an input/output (I/O) adapter in communication with the compute nodes over a Peripheral Component Interconnect Express (PCIe) bus, the method including: receiving a memory request from a first compute node to permit access by a second compute node to a local memory region of the first compute node; generating a remap window region in a memory element of the I/O adapter, the remap window region corresponding to a base address register (BAR) of the second compute node; and configuring the remap window region to point to the local memory region of the first compute node, wherein access by the second compute node to the BAR corresponding to the remap window region results in direct access of the local memory region of the first compute node by the second compute node.
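The following sketch models the remap-window mechanism with plain Python objects standing in for adapter memory and BARs: a window in the I/O adapter is configured to point at the first node's local memory, so accesses by the second node through the corresponding BAR resolve to that memory. All names are illustrative assumptions.

```python
class ComputeNode:
    def __init__(self, name: str, size: int):
        self.name = name
        self.local_memory = bytearray(size)


class IoAdapter:
    def __init__(self):
        # remap window (keyed by the consuming node's BAR index) -> (owner node, base offset)
        self.remap_windows = {}

    def configure_remap_window(self, bar_index: int, owner: ComputeNode, base: int) -> None:
        # Generate a remap window corresponding to the second node's BAR and point it
        # at the first node's local memory region.
        self.remap_windows[bar_index] = (owner, base)

    def bar_write(self, bar_index: int, offset: int, data: bytes) -> None:
        owner, base = self.remap_windows[bar_index]
        owner.local_memory[base + offset: base + offset + len(data)] = data

    def bar_read(self, bar_index: int, offset: int, length: int) -> bytes:
        owner, base = self.remap_windows[bar_index]
        return bytes(owner.local_memory[base + offset: base + offset + length])


node_a = ComputeNode("node-a", 1024)
adapter = IoAdapter()
adapter.configure_remap_window(bar_index=2, owner=node_a, base=0x100)   # node-a permits access
adapter.bar_write(2, 0, b"\xde\xad\xbe\xef")   # node-b's BAR access lands in node-a's memory
print(adapter.bar_read(2, 0, 4).hex())
```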
Abstract:
Systems and methods for connecting a device to one of a plurality of processing hosts. A virtual interface card (VIC) adapter learns the number and location of the hosts and an identification of the device; receives a mapping of the device to a selected host, wherein the host is selected from the plurality of hosts; and dynamically builds an interface that connects the device to the selected host.
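A brief, hypothetical Python model of the VIC adapter flow described above: the adapter learns the hosts and their locations, accepts a mapping of a device to one selected host, and dynamically builds the connecting interface.

```python
class VicAdapter:
    def __init__(self):
        self.hosts = {}        # host id -> location (e.g., blade slot)
        self.interfaces = []   # dynamically built (device, host) interfaces

    def learn_host(self, host_id: str, location: str) -> None:
        # Learn the number and location of the processing hosts.
        self.hosts[host_id] = location

    def map_device(self, device_id: str, host_id: str) -> None:
        # Receive a mapping of the device to a selected host and build the interface.
        if host_id not in self.hosts:
            raise KeyError(f"unknown host {host_id}")
        self.interfaces.append((device_id, host_id))


vic = VicAdapter()
vic.learn_host("host-1", "slot-1")
vic.learn_host("host-2", "slot-2")
vic.map_device("nvme-0", "host-2")
print(vic.interfaces)
```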
Abstract:
An example method for facilitating low latency remote direct memory access (RDMA) for microservers is provided and includes generating queue pairs (QPs) in a memory of an input/output (I/O) adapter of a microserver chassis having a plurality of compute nodes executing thereon, the QPs being associated with an RDMA connection between a first compute node and a second compute node in the microserver chassis, setting a flag in the QPs to indicate that the RDMA connection is local to the microserver chassis, and performing a loopback of RDMA packets within the I/O adapter from one memory region in the I/O adapter associated with the first compute node of the RDMA connection to another memory region in the I/O adapter associated with the second compute node of the RDMA connection.
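The sketch below (illustrative names only) captures the loopback path: when the flag on the QPs marks the RDMA connection as local to the chassis, the I/O adapter copies the payload directly between the memory regions it holds for the two compute nodes instead of sending packets out to the network.

```python
class QueuePair:
    def __init__(self, node: str, region: bytearray, local: bool):
        self.node = node
        self.region = region   # memory region in the I/O adapter associated with this node
        self.local = local     # flag: the RDMA connection is local to the chassis


class IoAdapter:
    def rdma_send(self, src: QueuePair, dst: QueuePair, payload: bytes) -> None:
        if src.local and dst.local:
            # Loopback within the adapter: copy from one node's region to the other's.
            dst.region[:len(payload)] = payload
        else:
            raise NotImplementedError("a non-local connection would go out over the fabric")


adapter = IoAdapter()
qp_a = QueuePair("node-a", bytearray(64), local=True)
qp_b = QueuePair("node-b", bytearray(64), local=True)
adapter.rdma_send(qp_a, qp_b, b"hello")
print(bytes(qp_b.region[:5]))
```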