Abstract:
Methods and apparatus for implementing flow control with reduced buffer usage for network devices. In response to detection of flow control events, transmission of a data unit or segment such as an Ethernet frame is preempted in favor of a flow control message, resulting in aborting transmission of the frame. Data corresponding to the entirety of the frame is buffered at the transmitting station until the frame has been transmitted (or after a delay), enabling retransmission of the aborted frame. Preemption of frames in favor of flow control messages results in earlier responses to flow control events, enabling the size of buffers to be reduced.
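The frame-preemption behavior described above can be illustrated with a short simulation. The following Python sketch is a loose illustration under assumed names (Transmitter, Frame, PAUSE) rather than the patented design: an in-flight frame is aborted when a flow control event fires, the flow control message goes out first, and the still-buffered frame is retransmitted afterwards.

```python
# Minimal sketch (not the patent's implementation): a transmitter that preempts an
# in-progress frame with a flow-control message and later retransmits the frame.
# All names (Transmitter, Frame, PAUSE) are illustrative assumptions.

from collections import deque
from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    length: int          # bytes
    sent: int = 0        # bytes already placed on the wire

PAUSE = "PAUSE"          # stand-in for an IEEE 802.3x-style flow-control message

class Transmitter:
    def __init__(self):
        self.retransmit = deque()    # aborted frames kept buffered for resend
        self.wire = []               # what actually went out, in order

    def transmit(self, frame, flow_control_at=None):
        """Send `frame`; if a flow-control event fires mid-frame, abort, send PAUSE,
        and keep the whole frame buffered so it can be retransmitted later."""
        for byte in range(frame.sent, frame.length):
            if flow_control_at is not None and byte == flow_control_at:
                self.wire.append(("ABORT", frame.frame_id, byte))
                self.wire.append((PAUSE,))          # flow-control message goes out first
                frame.sent = 0                      # entire frame is still buffered
                self.retransmit.append(frame)       # resend later
                return False
        self.wire.append(("FRAME", frame.frame_id, frame.length))
        return True

tx = Transmitter()
tx.transmit(Frame(frame_id=1, length=1500), flow_control_at=700)  # preempted
tx.transmit(tx.retransmit.popleft())                               # retransmitted intact
print(tx.wire)
```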
Abstract:
An embodiment may include circuitry to facilitate, at least in part, a first network interface controller (NIC) in a client to be capable of accessing, via a second NIC in a server that is remote from the client and in a manner that is independent of an operating system environment in the server, at least one command interface of another controller of the server. The command interface may include at least one controller command queue. Such accessing may include writing at least one queue element to the at least one command queue to command the another controller to perform at least one operation associated with the another controller. The another controller may perform the at least one operation in response, at least in part, to the at least one queue element. Many alternatives, variations, and modifications are possible.
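To make the remote command-queue access concrete, here is a rough Python sketch. The 64-byte entry layout loosely follows an NVMe submission queue entry, and the rdma_write and ring_doorbell helpers are stand-ins invented for illustration; they model a client NIC writing a queue element into server-side controller memory without involving the server's operating system.

```python
# Illustrative sketch only: a client-side NIC writing a queue element directly into a
# server-side controller command queue (modeled here as plain memory), bypassing the
# server OS. Field offsets and the rdma_write/ring_doorbell helpers are assumptions.

import struct

QUEUE_DEPTH = 16
ENTRY_SIZE = 64
command_queue = bytearray(QUEUE_DEPTH * ENTRY_SIZE)   # server-side queue memory
doorbell = {"tail": 0}                                 # server-side doorbell register

def build_queue_element(opcode, command_id, namespace_id):
    """Pack a minimal 64-byte queue element (most fields left zero)."""
    entry = bytearray(ENTRY_SIZE)
    struct.pack_into("<BxH", entry, 0, opcode, command_id)    # opcode, command id
    struct.pack_into("<I", entry, 4, namespace_id)            # namespace id
    return bytes(entry)

def rdma_write(dest, offset, payload):
    """Stand-in for the client NIC writing server memory without server CPU/OS help."""
    dest[offset:offset + len(payload)] = payload

def ring_doorbell(new_tail):
    doorbell["tail"] = new_tail    # tells the controller a new element is ready

# Client commands the remote controller to perform one operation (e.g., a read).
slot = doorbell["tail"]
element = build_queue_element(opcode=0x02, command_id=7, namespace_id=1)
rdma_write(command_queue, slot * ENTRY_SIZE, element)
ring_doorbell((slot + 1) % QUEUE_DEPTH)
print("queue slot", slot, "->", command_queue[slot*ENTRY_SIZE : slot*ENTRY_SIZE+8].hex())
```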
Abstract:
A method and apparatus to reduce memory required in a network interface controller to store per flow state information associated with a network connection is provided. Instead of storing per flow state information for a connection in the network interface controller at an endpoint of the connection, the per flow state information for the connection is stored in memory external to the network interface controller. The stored state information is conveyed in a packet by the network interface controller between the endpoints of the connection. For a Transmission Control Protocol (TCP) connection, the state information is conveyed between the endpoints of the TCP connection in a TCP option included in the TCP header in the packet.
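The option-carrying idea lends itself to a small worked example. The sketch below serializes assumed per-flow fields into a TCP option and builds a bare TCP header around it; the state layout and the use of experimental option kind 253 (RFC 4727) are illustrative choices, not the abstract's specification.

```python
# A minimal sketch of the idea: instead of keeping per-flow state in NIC memory,
# serialize it into a TCP option carried in the packet's TCP header. The state layout
# and the use of experimental option kind 253 are illustrative assumptions.

import struct

STATE_OPTION_KIND = 253   # experimental TCP option kind, used here only as an example

def pack_state_option(snd_nxt, rcv_nxt, window_scale):
    """Serialize per-flow state as a TCP option: kind, length, then the state bytes."""
    state = struct.pack("!IIB", snd_nxt, rcv_nxt, window_scale)
    option = bytes([STATE_OPTION_KIND, 2 + len(state)]) + state
    # TCP options must pad the header to a 32-bit boundary.
    option += b"\x00" * (-len(option) % 4)
    return option

def build_tcp_header(src_port, dst_port, seq, ack, options):
    data_offset_words = 5 + len(options) // 4            # 20-byte base header + options
    offset_flags = (data_offset_words << 12) | 0x010     # ACK flag set
    header = struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                         offset_flags, 65535, 0, 0)      # checksum left 0 in this sketch
    return header + options

opt = pack_state_option(snd_nxt=1000, rcv_nxt=5000, window_scale=7)
hdr = build_tcp_header(12345, 80, seq=1000, ack=5000, options=opt)
print(len(hdr), "byte TCP header carrying", len(opt), "bytes of per-flow state options")
```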
Abstract:
Examples are disclosed for use of vendor defined messages to execute a command to access a storage device maintained at a server. In some examples, a network input/output device coupled to the server may receive the command from a client remote to the server for the client to access the storage device. For these examples, elements or components of the network input/output device may be capable of forwarding the command either directly to a Non-Volatile Memory Express (NVMe) controller that controls the storage device or to a manageability module coupled between the network input/output device and the NVMe controller. Vendor specific information may be forwarded with the command and used by either the NVMe controller or the manageability module to facilitate execution of the command. Other examples are described and claimed.
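A toy dispatcher can show the two forwarding paths. In the following sketch all class and field names (VendorDefinedMessage, ManageabilityModule, needs_management) are assumptions made for illustration: the network I/O device hands a command either straight to the NVMe controller or to a manageability module that adjusts it first.

```python
# Sketch of the forwarding decision described above, with all class and field names
# invented for illustration: a network I/O device receives a client's storage command
# plus vendor-specific information and hands it either directly to the NVMe controller
# or to a manageability module sitting between the two.

from dataclasses import dataclass

@dataclass
class VendorDefinedMessage:
    command: bytes            # the storage command to execute (opaque here)
    vendor_info: dict         # vendor-specific hints used to execute the command
    needs_management: bool    # e.g., requires validation or policy checks

class NVMeControllerModel:
    def execute(self, command, vendor_info):
        return f"NVMe executed {command!r} with hints {vendor_info}"

class ManageabilityModule:
    def __init__(self, controller):
        self.controller = controller
    def handle(self, command, vendor_info):
        # The module may rewrite or validate the command before passing it on.
        vendor_info = {**vendor_info, "validated": True}
        return self.controller.execute(command, vendor_info)

class NetworkIODevice:
    def __init__(self, controller, mgmt):
        self.controller, self.mgmt = controller, mgmt
    def receive(self, msg: VendorDefinedMessage):
        if msg.needs_management:
            return self.mgmt.handle(msg.command, msg.vendor_info)
        return self.controller.execute(msg.command, msg.vendor_info)

controller = NVMeControllerModel()
nic = NetworkIODevice(controller, ManageabilityModule(controller))
print(nic.receive(VendorDefinedMessage(b"READ LBA 0x100", {"qos": "high"}, False)))
print(nic.receive(VendorDefinedMessage(b"FORMAT NS 1", {"qos": "low"}, True)))
```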
Abstract:
Methods and apparatus are disclosed for virtualizable, forward-compatible hardware-software interfaces. Embodiments may be used in a driver whether it is a physical driver or a virtual driver. Commands are queued from the driver and fetched to the device. An actions table is accessed to determine if drivers are permitted to perform commands. Events are queued for the drivers responsive to commands. If drivers are not permitted to perform a command, device firmware may forward the command to a privileged driver to perform the required command. If a driver is only permitted to perform a command with assistance, the command is forwarded for corrections and execution. If a command is to be dropped, a completion event may be queued as if the command had executed. Drivers may have no indication of which actions were taken. The actions table may be changed for hardware/software modifications or dynamically according to configuration changes.
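The actions-table behavior can be sketched in a few lines. The table contents and action names below are invented for illustration; the point is that the requesting driver always receives a completion event, regardless of whether its command was executed, corrected, forwarded to a privileged driver, or dropped.

```python
# A toy sketch of the action-table dispatch outlined above; the action names and the
# table contents are illustrative assumptions, not the patent's actual encoding.

ALLOW, FORWARD, ASSIST, DROP = "allow", "forward", "assist", "drop"

# Per-command policy, e.g. keyed by (driver kind, command). Can be swapped at runtime.
actions_table = {
    ("virtual", "set_mac"):   FORWARD,   # privileged driver performs it on VF's behalf
    ("virtual", "set_mtu"):   ASSIST,    # firmware corrects the command, then executes
    ("virtual", "promisc"):   DROP,      # silently dropped, but completion still queued
    ("physical", "set_mac"):  ALLOW,
}

def handle_command(driver_kind, command, event_queue, privileged_queue):
    action = actions_table.get((driver_kind, command), ALLOW)
    if action == ALLOW:
        event_queue.append((command, "completed"))
    elif action == FORWARD:
        privileged_queue.append(command)            # privileged driver executes it
        event_queue.append((command, "completed"))  # requester still sees a completion
    elif action == ASSIST:
        corrected = command + "[corrected]"
        event_queue.append((corrected, "completed"))
    elif action == DROP:
        event_queue.append((command, "completed"))  # completion queued as if executed

events, privileged = [], []
for cmd in ("set_mac", "set_mtu", "promisc"):
    handle_command("virtual", cmd, events, privileged)
print(events)      # the virtual driver cannot tell which action was actually taken
print(privileged)  # ['set_mac'] waits for the privileged driver
```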
Abstract:
Examples are disclosed for client access to a storage medium coupled with a server. A network input/output device for the server may receive a remote direct memory access (RDMA) command including a steering tag (S-Tag) from a client remote to the server. For these examples, the network input/output device may forward the RDMA command to a Non-Volatile Memory Express (NVMe) controller, and access may be provided to a storage medium based on an allocation scheme that assigned the S-Tag to the storage medium. In some other examples, an NVMe controller may generate a memory mapping of one or more storage devices controlled by the NVMe controller to addresses for a base address register (BAR) on a Peripheral Component Interconnect Express (PCIe) bus. PCIe memory access commands received by the NVMe controller may be translated based on the memory mapping to provide access to the storage device. Other examples are described and claimed.
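A brief sketch of the S-Tag allocation scheme may help. The table contents, region sizes, and address arithmetic below are assumptions made for illustration: an incoming RDMA command is steered to whichever storage medium its S-Tag was allocated to, with a bounds check against the allocated region.

```python
# A small sketch of the S-Tag allocation idea described above: steering tags handed out
# to clients are mapped to storage media, so an incoming RDMA command carrying an S-Tag
# can be steered to the right medium. Names and the address math are illustrative.

from dataclasses import dataclass

@dataclass
class RdmaCommand:
    s_tag: int        # steering tag presented by the remote client
    offset: int       # offset within the region the S-Tag was allocated for
    length: int
    is_write: bool

# Allocation scheme: each S-Tag was assigned to (storage medium, base address, region size).
stag_table = {
    0x1001: ("nvme0n1", 0x0000, 1 << 20),
    0x1002: ("nvme0n2", 0x8000, 1 << 20),
}

def steer(cmd: RdmaCommand):
    """Translate an RDMA command into an access on the medium its S-Tag was allocated to."""
    if cmd.s_tag not in stag_table:
        raise PermissionError(f"unknown S-Tag {cmd.s_tag:#x}")
    medium, base, size = stag_table[cmd.s_tag]
    if cmd.offset + cmd.length > size:
        raise ValueError("access exceeds the region allocated to this S-Tag")
    op = "write" if cmd.is_write else "read"
    return f"{op} {cmd.length} bytes on {medium} at address {base + cmd.offset:#x}"

print(steer(RdmaCommand(s_tag=0x1001, offset=0x200, length=4096, is_write=False)))
```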
Abstract:
Technologies for providing FPGA infrastructure-as-a-service include a computing device having an FPGA, scheduler logic, and design loader logic. The scheduler logic selects an FPGA application for execution and the design loader logic loads a design image into the FPGA. The scheduler logic receives a ready signal from the FPGA in response to loading the design and sends a start signal to the FPGA application. The FPGA executes the FPGA application in response to the start signal. The scheduler logic may time-share the FPGA among multiple FPGA applications. The computing device may include signaling logic to manage signals between a user process and the FPGA application and DMA logic to manage bulk data transfer between the user process and the FPGA application. The computing device may include a user process linked to an FPGA library executed by a processor of the computing device. Other embodiments are described and claimed.
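The load/ready/start/time-share sequence can be illustrated with a simulated FPGA. Everything in the sketch below (SimulatedFPGA, the string "ready" signal, the slice accounting) is an assumption made for the example, not the claimed design.

```python
# A highly simplified sketch of the scheduling flow described above (select an FPGA
# application, load its design image, wait for the FPGA's ready signal, send start,
# then time-share). The FPGA and its signals are simulated; every name is illustrative.

from collections import deque

class SimulatedFPGA:
    def load_design(self, image):
        self.image = image
        return "ready"                      # ready signal after the image is loaded

    def run_slice(self, app_name, time_slice_ms):
        return f"{app_name} ran for {time_slice_ms} ms on design {self.image!r}"

class Scheduler:
    def __init__(self, fpga, time_slice_ms=10):
        self.fpga = fpga
        self.time_slice_ms = time_slice_ms
        self.ready_queue = deque()          # FPGA applications awaiting execution

    def submit(self, app_name, design_image, slices_needed):
        self.ready_queue.append([app_name, design_image, slices_needed])

    def run(self):
        log = []
        while self.ready_queue:
            app = self.ready_queue.popleft()             # select next FPGA application
            name, image, _ = app
            if self.fpga.load_design(image) != "ready":  # wait for the ready signal
                continue
            log.append(self.fpga.run_slice(name, self.time_slice_ms))  # start signal
            app[2] -= 1
            if app[2] > 0:
                self.ready_queue.append(app)             # time-share: requeue the app
        return log

sched = Scheduler(SimulatedFPGA())
sched.submit("compress", "bitstream_A", slices_needed=2)
sched.submit("crypto", "bitstream_B", slices_needed=1)
print("\n".join(sched.run()))
```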
Abstract:
An embodiment may include circuitry that may be capable of performing operations that may include generating, at least in part, at least one message to announce that at least one network node (1) is requesting, at least in part, that one or more transmissions to the at least one network node be postponed, at least in part, and/or (2) is entering, at least in part after issuance of the at least one message, a relatively lower power state relative to a relatively higher power state. Additionally or alternatively, the operations may include, in response, at least in part, to the at least one message, postponing, at least in part, by at least one intermediate node, at least one transmission (received by the at least one intermediate node) to the at least one network node. Many alternatives, variations, and/or modifications are possible without departing from this embodiment.
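The postpone-and-buffer behavior at the intermediate node can be sketched briefly. The message names and buffering policy below are illustrative assumptions: after a node announces that transmissions to it should be postponed, the intermediate node holds frames addressed to it and releases them once the node resumes.

```python
# An illustrative sketch of the behavior described above: a node announces that
# transmissions to it should be postponed (e.g., before it enters a lower-power state),
# and an intermediate node buffers traffic for it until the node resumes. All message
# names and the buffering policy are assumptions made for this example.

from collections import defaultdict, deque

class IntermediateNode:
    def __init__(self):
        self.postponed = set()                   # nodes that asked for postponement
        self.held = defaultdict(deque)           # frames held per sleeping destination

    def on_postpone_announcement(self, node_id):
        self.postponed.add(node_id)              # e.g., node is entering a low-power state

    def on_resume(self, node_id):
        self.postponed.discard(node_id)
        return list(self.held.pop(node_id, deque()))   # forward everything held for it

    def forward(self, dst, frame):
        if dst in self.postponed:
            self.held[dst].append(frame)         # postpone the transmission
            return None
        return (dst, frame)                      # transmit immediately

switch = IntermediateNode()
switch.on_postpone_announcement("node-7")
switch.forward("node-7", "frame-1")              # held
switch.forward("node-9", "frame-2")              # forwarded right away
print(switch.on_resume("node-7"))                # ['frame-1'] delivered after wake-up
```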