Abstract:
In one embodiment, a method is provided. The method of this embodiment provides storing a packet header at a set of at least one page of memory allocated to storing packet headers, and storing the packet header and a packet payload at a location not in the set of at least one page of memory allocated to storing packet headers.
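The benefit of this split is that headers land in a small, dense region that protocol processing can walk with good cache locality, while full packets live elsewhere. A minimal C sketch of the idea (identifiers, sizes, and the pool layout are hypothetical, not from the patent; error handling omitted):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096
#define HDR_LEN   64                   /* assumed fixed header length */

/* Pages reserved exclusively for packet headers (the "set of at least
 * one page" in the abstract). */
static uint8_t hdr_pages[4][PAGE_SIZE];
static size_t  hdr_off;                /* next free byte in the pool */

/* Store the header in the reserved header pages, and the whole packet
 * (header plus payload) at a location outside that set. */
static void store_packet(const uint8_t *pkt, size_t len,
                         uint8_t **hdr_out, uint8_t **pkt_out)
{
    uint8_t *hdr = &hdr_pages[0][0] + hdr_off;   /* no wraparound handling */
    memcpy(hdr, pkt, HDR_LEN);
    hdr_off += HDR_LEN;

    uint8_t *copy = malloc(len);       /* ordinary memory, outside the pool */
    memcpy(copy, pkt, len);

    *hdr_out = hdr;
    *pkt_out = copy;
}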
Abstract:
In one embodiment, a method is provided. The method of this embodiment provides performing packet processing on a packet, and placing the packet in a placement queue; if no read buffer is available, determining if the size of the placement queue exceeds a threshold polling value; and if the size of the placement queue exceeds the threshold polling value: if there are one or more pending DMM (data movement module) requests, polling a DMM to determine if the DMM has completed the one or more pending DMM requests for data associated with an application; and if the DMM has completed the one or more pending DMM requests, then sending a completion notification to the application to receive the data.
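The sequence amounts to a threshold-gated polling decision. A rough C sketch under assumed names (POLL_THRESHOLD, the DMM stubs, and the driver state variables are all invented for illustration):

#include <stdbool.h>
#include <stddef.h>

#define POLL_THRESHOLD 32              /* assumed threshold polling value */

/* Stand-ins for driver state; a real driver reads these from hardware. */
static size_t queue_len;               /* packets on the placement queue    */
static int    dmm_pending;             /* outstanding DMM requests          */
static bool   read_buf_free;           /* an application read buffer exists */

static bool dmm_poll_complete(void)    /* stub: pretend the DMM finished */
{
    dmm_pending = 0;
    return true;
}

static void notify_app_completion(void) { /* wake the application */ }

/* After packet processing places a packet on the placement queue,
 * decide whether to poll the data movement module (DMM). */
static void after_enqueue(void)
{
    ++queue_len;
    if (read_buf_free)
        return;                        /* a read buffer is available      */
    if (queue_len <= POLL_THRESHOLD)
        return;                        /* queue still small: keep waiting */
    /* Queue exceeds the threshold: poll for pending DMM requests. */
    if (dmm_pending > 0 && dmm_poll_complete())
        notify_app_completion();       /* the application's data is ready */
}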
Abstract:
In one embodiment, a method is provided. The method of this embodiment provides receiving an indication on a network component that one or more packets have been received from a network; the network component notifying a TCP-A (Transmission Control Protocol accelerated) driver that the one or more packets have arrived; the TCP-A driver performing packet processing for at least one of the one or more packets; and the TCP-A driver performing one or more operations that result in a data movement module placing one or more corresponding payloads of the at least one of the one or more packets into a read buffer.
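Read as a pipeline, the flow is: NIC indication, driver notification, protocol processing, then payload placement by the data movement module. A hypothetical C sketch (none of these functions or names are a real driver API):

#include <string.h>

struct packet { const void *hdr; const void *payload; unsigned len; };

static void *app_read_buffer;          /* posted by the application */

/* Stub for the data movement module; a real one is a DMA engine. */
static void dmm_copy(void *dst, const void *src, unsigned len)
{
    memcpy(dst, src, len);
}

/* Stub for TCP protocol processing; returns 0 when the packet is good. */
static int tcp_process_header(const struct packet *p) { (void)p; return 0; }

/* Called when the network component indicates packets have arrived. */
void tcpa_rx_notify(struct packet *pkts, int n)
{
    for (int i = 0; i < n; i++) {
        if (tcp_process_header(&pkts[i]) != 0)
            continue;                  /* bad packet: drop it */
        /* Have the data movement module place the payload in the buffer. */
        dmm_copy(app_read_buffer, pkts[i].payload, pkts[i].len);
    }
}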
Abstract:
Methods and apparatus for end-to-end data plane offloading for distributed storage using protocol hardware and Protocol Independent Switch Architecture (PISA) devices. Hardware-based data plane forwarding is implemented in compute and storage switches, which are smart server switches running software in kernel and user space. The compute switch is coupled to one or more compute servers/nodes and the storage switch is coupled to one or more storage servers or storage arrays. The hardware-based data plane forwarding facilitates an end-to-end data plane between the compute server(s) and storage server(s)/array(s) that is offloaded to hardware. In one example the software comprises Ceph components used to implement control plane operations in connection with hardware-offloaded data plane operations, storage traffic employs the NVMe-oF protocol, and the kernels include NVMe-oF modules. In one aspect the hardware-based data plane forwarding is implemented using programmable P4 switch chips. In one aspect the compute and storage switches are Top of Rack (ToR) switches.
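The offloaded data plane behaves like a match-action table programmed into the switch hardware: match a storage flow, forward it toward the owning node, and punt misses to the software control plane. A toy C model of such a table (real deployments would express this in P4; the fields and values here are invented):

#include <stdint.h>

/* Toy match-action table: map an NVMe-oF flow, reduced here to just the
 * NVMe namespace ID, to the egress port facing the owning storage node. */
struct fwd_entry { uint32_t nsid; int egress_port; };

static const struct fwd_entry fwd_table[] = {
    { 1, 4 },                          /* namespace 1 behind port 4 */
    { 2, 7 },                          /* namespace 2 behind port 7 */
};

static int lookup_egress(uint32_t nsid)
{
    for (unsigned i = 0; i < sizeof fwd_table / sizeof fwd_table[0]; i++)
        if (fwd_table[i].nsid == nsid)
            return fwd_table[i].egress_port;
    return -1;                         /* miss: punt to the control plane */
}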
Abstract:
Processor affinity of an application/thread may be used to deliver an interrupt caused by the application/thread to a best processor at runtime. The processor to which the interrupt is delivered may either run the target application/thread or be located in the same socket as the processor that runs the target application/thread. The processor affinity of the application/thread may be pushed down at runtime to a network device, a chipset, a memory control hub (“MCH”), or an input/output hub (“IOH”), which will facilitate delivery of the interrupt using that affinity information.
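In effect, the operating system exports its scheduling decision to the I/O hardware. A hypothetical C sketch of the push-down (the register layout and names are invented, not a real chipset interface):

#include <stdint.h>

/* Hypothetical affinity register on the NIC, chipset, MCH, or IOH. */
struct dev_affinity_reg { volatile uint32_t target_cpu; };

static struct dev_affinity_reg nic_affinity;   /* stand-in for MMIO */

/* Called by the OS when the target application/thread is (re)scheduled,
 * so the device delivers that thread's interrupts to the right CPU, or
 * at least to a CPU in the same socket. */
void push_affinity(uint32_t cpu)
{
    nic_affinity.target_cpu = cpu;
}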
Abstract:
A link layer system is provided. The link layer system includes a first link layer control module, a retry queue for storing a transmitted data packet, and a retry control module coupled to the first link layer control module. The retry control module directs the retry queue to discard the transmitted data packet when an acknowledgment bit is received by the first link layer control module.
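The retry queue is essentially a ring of in-flight packets whose oldest entry is released by an acknowledgment. A small C sketch under assumed sizes and names:

#include <stdbool.h>
#include <string.h>

#define RETRY_SLOTS 8                  /* assumed in-flight window */
#define PKT_MAX     256                /* assumed max packet size  */

/* Every transmitted packet is held until it is acknowledged. */
struct retry_queue {
    unsigned char pkt[RETRY_SLOTS][PKT_MAX];
    unsigned      head, tail;          /* head = oldest unacknowledged */
};

static void rq_hold(struct retry_queue *q, const void *pkt, unsigned len)
{
    memcpy(q->pkt[q->tail % RETRY_SLOTS], pkt, len);
    q->tail++;                         /* keep a copy for retransmission */
}

/* Retry control: an acknowledgment bit releases the held copy; without
 * it, the packet stays queued and can be retransmitted. */
static void rq_on_ack(struct retry_queue *q, bool ack_bit)
{
    if (ack_bit && q->head != q->tail)
        q->head++;                     /* oldest packet confirmed: discard */
}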
Abstract:
In general, in one aspect, the disclosure describes a method that includes receiving multiple ingress Internet Protocol packets, each of the multiple ingress Internet Protocol packets having an Internet Protocol header and a Transmission Control Protocol segment having a Transmission Control Protocol header and a Transmission Control Protocol payload, where the multiple packets belong to the same Transmission Control Protocol/Internet Protocol flow. The method also includes preparing an Internet Protocol packet having a single Internet Protocol header and a single Transmission Control Protocol segment having a single Transmission Control Protocol header and a single payload formed by a combination of the Transmission Control Protocol segment payloads of the multiple Internet Protocol packets. The method further includes generating a signal that causes receive processing of the Internet Protocol packet.
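This is the classic receive-coalescing (large receive offload) pattern: one set of headers, many payloads. A simplified C sketch (the buffer size and flush policy are assumptions; real code must also fix up the IP total length and TCP checksum/sequence fields before signaling receive processing):

#include <stdint.h>
#include <string.h>

#define COALESCE_MAX 65535             /* assumed maximum merged packet */

/* One context per TCP/IP flow: keep the first packet's IP and TCP
 * headers and append each later segment's payload behind them. */
struct coalesce_ctx {
    uint8_t  buf[COALESCE_MAX];        /* single header + merged payload */
    unsigned hdr_len;                  /* combined IP + TCP header bytes */
    unsigned len;                      /* bytes used in buf so far       */
};

/* Add one ingress packet of the flow; return 1 when the merged packet
 * should be flushed, i.e. receive processing should be signaled. */
int coalesce_add(struct coalesce_ctx *c, const uint8_t *pkt,
                 unsigned hdr_len, unsigned payload_len)
{
    if (c->len + hdr_len + payload_len > COALESCE_MAX)
        return 1;                      /* full: flush before adding more */
    if (c->len == 0) {                 /* first packet keeps its headers */
        memcpy(c->buf, pkt, hdr_len + payload_len);
        c->hdr_len = hdr_len;
        c->len = hdr_len + payload_len;
    } else {                           /* later packets add payload only */
        memcpy(c->buf + c->len, pkt + hdr_len, payload_len);
        c->len += payload_len;
    }
    return 0;
}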
Abstract:
In general, in one aspect, the disclosure describes a method of maintaining network protocol timers in data structures associated with different respective processors in a multi-processor system. The timers accessed by a respective one of the processors include timers of connections mapped to the processor.
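Because each connection is mapped to one processor, that processor's timers can live in a private structure it touches without cross-processor locking. A minimal C sketch under assumed names:

#include <stdint.h>

#define NCPU 4                         /* assumed processor count */

/* One timer list per processor; a connection's timers always live in
 * the list of the CPU the connection is mapped to. */
struct timer         { uint64_t expires; struct timer *next; };
struct percpu_timers { struct timer *head; };

static struct percpu_timers timers[NCPU];

/* Map a connection (e.g. a hash of its 4-tuple) to a processor. */
static unsigned conn_to_cpu(uint32_t conn_hash) { return conn_hash % NCPU; }

static void add_conn_timer(uint32_t conn_hash, struct timer *t)
{
    struct percpu_timers *pt = &timers[conn_to_cpu(conn_hash)];
    t->next  = pt->head;               /* push onto the CPU's private list */
    pt->head = t;
}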
Abstract:
Technologies for network interface controllers (NICs) include a computing device having a NIC coupled to a root FPGA via an I/O link. The root FPGA is further coupled to multiple worker FPGAs by a serial link with each worker FPGA. The NIC may receive a remote direct memory access (RDMA) message from a remote host and send the RDMA message to the root FPGA via the I/O link. The root FPGA determines a target FPGA based on a memory address of the RDMA message. Each FPGA is associated with a part of a unified address space. If the target FPGA is a worker FPGA, the root FPGA sends the RDMA message to the worker FPGA via the corresponding serial link, and the worker FPGA processes the RDMA message. If the root FPGA is the target, the root FPGA may process the RDMA message. Other embodiments are described and claimed.
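The root FPGA acts as an address decoder over the unified address space. A toy C sketch of the dispatch (the region size, FPGA count, and function names are assumptions for illustration):

#include <stdint.h>

#define NUM_FPGAS   4                  /* root plus three workers (assumed) */
#define REGION_SIZE 0x40000000ULL      /* each FPGA owns 1 GiB (assumed)    */

static void process_locally(uint64_t addr) { (void)addr; /* root handles it */ }
static void forward_serial(int fpga, uint64_t addr)
{
    (void)fpga; (void)addr;            /* send over that worker's serial link */
}

/* Decode the RDMA target from the memory address: the unified address
 * space is split into equal regions, one per FPGA. */
static int target_fpga(uint64_t rdma_addr)
{
    return (int)(rdma_addr / REGION_SIZE) % NUM_FPGAS;
}

void root_dispatch(uint64_t rdma_addr)
{
    int t = target_fpga(rdma_addr);
    if (t == 0)
        process_locally(rdma_addr);    /* FPGA 0 is the root itself */
    else
        forward_serial(t, rdma_addr);  /* worker FPGA processes the message */
}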