Abstract:
A network interface device for use with a host computer that includes a host processor and a memory, and which is configured to concurrently run a master operating system and at least one virtual operating system. The device includes a bus interface that communicates over a bus with the host processor and the memory, and a network interface, which is coupled to send and receive data packets carrying data over a packet network. A protocol processor is coupled between the bus interface and the network interface so as to convey the data between the network interface and the memory while performing protocol processing on the data packets under instructions from the at least one virtual operating system, while bypassing the master operating system.
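As a rough illustration of the bypass just described, the sketch below assumes (hypothetically) that each virtual operating system owns a work queue and a doorbell register mapped into its own address space, so it can hand descriptors to the protocol processor without a trip through the master operating system. All structure and function names here are invented for illustration and are not taken from the abstract above.

```c
/* Hypothetical sketch of per-guest work queues that let a virtual OS
 * post descriptors to the NIC's protocol processor directly, without
 * trapping into the master operating system.  All names are invented. */
#include <stdint.h>
#include <stdio.h>

#define QUEUE_DEPTH 16

struct send_desc {            /* one work element posted by a guest  */
    uint64_t buf_addr;        /* guest-physical address of the data  */
    uint32_t length;          /* payload length in bytes             */
    uint32_t flow_id;         /* identifies the offloaded connection */
};

struct guest_queue {          /* mapped into the guest's own memory  */
    struct send_desc ring[QUEUE_DEPTH];
    uint32_t producer;        /* written by the guest                */
    uint32_t consumer;        /* advanced by the NIC                 */
    volatile uint32_t *doorbell; /* MMIO register owned by this guest */
};

/* Post one descriptor and ring the doorbell; no hypervisor involved. */
static int guest_post_send(struct guest_queue *q, uint64_t addr,
                           uint32_t len, uint32_t flow)
{
    uint32_t next = (q->producer + 1) % QUEUE_DEPTH;
    if (next == q->consumer)
        return -1;                         /* ring full */
    q->ring[q->producer] = (struct send_desc){ addr, len, flow };
    q->producer = next;
    *q->doorbell = q->producer;            /* notifies the protocol processor */
    return 0;
}

int main(void)
{
    static uint32_t fake_doorbell;         /* stands in for an MMIO page */
    struct guest_queue q = { .doorbell = &fake_doorbell };
    guest_post_send(&q, 0x10000, 1500, 7);
    printf("posted %u descriptor(s), doorbell=%u\n", q.producer, fake_doorbell);
    return 0;
}
```

In such a scheme, the only privileged step is the initial mapping of the queue and doorbell; steady-state transmit requests never pass through the master operating system.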
Abstract:
A network interface device includes a bus interface that communicates over a bus with a host processor and memory, and a network interface, including at least first and second physical ports, which are coupled to send and receive data packets carrying data over a packet network. A protocol processor includes a single transmit processing pipeline and a single receive processing pipeline, which are coupled between the bus interface and the network interface so as to convey the data between both of the first and second physical ports of the network interface and the memory via the bus interface while performing protocol offload processing on the data packets.
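The sketch below shows, in simplified form, how one receive pipeline might serve two physical ports by tagging each packet with its ingress port before it enters the shared offload stage; the round-robin arbitration and all names are assumptions for illustration, not details taken from the abstract.

```c
/* Minimal sketch of a single receive pipeline shared by two physical
 * ports: packets from either port are tagged with their ingress port
 * and fed through the same offload stage.  Names are illustrative. */
#include <stdint.h>
#include <stdio.h>

struct rx_pkt {
    uint8_t  port;       /* 0 or 1: which physical port it arrived on */
    uint32_t length;
    uint32_t flow_id;
};

/* One offload stage handles traffic arriving on both ports. */
static void rx_pipeline_process(const struct rx_pkt *p)
{
    /* header parsing, checksum validation, etc. would happen here */
    printf("port %u: flow %u, %u bytes offloaded\n",
           p->port, p->flow_id, p->length);
}

int main(void)
{
    /* Round-robin arbitration between the two ports into the one pipeline. */
    struct rx_pkt port0[] = { {0, 1500, 11}, {0, 64, 12} };
    struct rx_pkt port1[] = { {1, 9000, 21} };
    size_t i0 = 0, i1 = 0, n0 = 2, n1 = 1;

    while (i0 < n0 || i1 < n1) {
        if (i0 < n0) rx_pipeline_process(&port0[i0++]);
        if (i1 < n1) rx_pipeline_process(&port1[i1++]);
    }
    return 0;
}
```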
Abstract:
A network interface device includes a bus interface that communicates over a bus with a host processor and memory, and a network interface that sends and receives data packets carrying data over a packet network. A protocol processor conveys the data between the network interface and the memory via the bus interface while performing protocol offload processing on the data packets in accordance with multiple different application flows. The bus interface queues the data for transmission over the bus in a plurality of queues that are respectively assigned to the different application flows, and transmits the data over the bus according to the queues.
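A minimal sketch of per-flow queuing at the bus interface follows, assuming a fixed number of flows and a plain round-robin drain onto the bus; both of those choices, and all names, are illustrative assumptions rather than details of the device described above.

```c
/* Illustrative sketch of per-flow queuing at the bus interface: data
 * destined for host memory is enqueued by application flow, then a
 * simple round-robin scheduler drains the queues onto the bus. */
#include <stdint.h>
#include <stdio.h>

#define NUM_FLOWS   4
#define QUEUE_SLOTS 8

struct flow_queue {
    uint32_t bytes[QUEUE_SLOTS];   /* sizes of pending bus transfers */
    int head, tail, count;
};

static struct flow_queue flows[NUM_FLOWS];

static void enqueue(int flow, uint32_t nbytes)
{
    struct flow_queue *q = &flows[flow];
    if (q->count == QUEUE_SLOTS) return;       /* back-pressure in practice */
    q->bytes[q->tail] = nbytes;
    q->tail = (q->tail + 1) % QUEUE_SLOTS;
    q->count++;
}

/* Round-robin over the per-flow queues, one bus transaction per turn. */
static void drain_to_bus(void)
{
    for (int f = 0; f < NUM_FLOWS; f++) {
        struct flow_queue *q = &flows[f];
        if (q->count == 0) continue;
        printf("bus: flow %d, %u-byte transfer\n", f, q->bytes[q->head]);
        q->head = (q->head + 1) % QUEUE_SLOTS;
        q->count--;
    }
}

int main(void)
{
    enqueue(0, 4096); enqueue(0, 512); enqueue(2, 1500);
    drain_to_bus();
    drain_to_bus();
    return 0;
}
```

Assigning queues per application flow, rather than one shared queue, keeps a bursty flow from monopolizing the bus ahead of the others.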
Abstract:
A method for communication includes inputting from a host processor to a network interface device a sequence of work requests indicative of operations to be carried out by the network interface device with respect to a plurality of connections. The device looks ahead through the sequence in order to identify at least first and second operations that are to be carried out with respect to one of the connections in response to first and second work requests, respectively, wherein the second work request does not immediately follow the first work request in the sequence. The device loads context data for the one of the connections from a host memory into a context cache, and performs at least the first and second operations sequentially while the context data are held in the cache.
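The look-ahead can be pictured as scanning a bounded window of pending work requests and batching every request that targets the same connection, so the connection's context is fetched from host memory once rather than once per request. The window size, structures, and names below are assumptions for illustration.

```c
/* Hedged sketch of look-ahead scheduling: when several pending work
 * requests (not necessarily adjacent) target the same connection,
 * load that connection's context once and service them back to back
 * while it sits in the context cache. */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define LOOKAHEAD 8              /* how far ahead the device scans */

struct work_request {
    uint32_t conn_id;            /* which connection the operation targets */
    uint32_t opcode;
    bool     done;
};

static void load_context(uint32_t conn_id)
{
    printf("load context for connection %u into cache\n", conn_id);
}

static void execute(const struct work_request *wr)
{
    printf("  conn %u: execute opcode %u\n", wr->conn_id, wr->opcode);
}

static void process_with_lookahead(struct work_request *wrs, int n)
{
    for (int i = 0; i < n; i++) {
        if (wrs[i].done) continue;
        load_context(wrs[i].conn_id);          /* one fetch from host memory */
        /* Look ahead and run every pending request for this connection. */
        int limit = (i + LOOKAHEAD < n) ? i + LOOKAHEAD : n;
        for (int j = i; j < limit; j++) {
            if (!wrs[j].done && wrs[j].conn_id == wrs[i].conn_id) {
                execute(&wrs[j]);
                wrs[j].done = true;
            }
        }
    }
}

int main(void)
{
    struct work_request wrs[] = {
        {5, 1, false}, {9, 1, false}, {5, 2, false}, {7, 1, false},
    };
    process_with_lookahead(wrs, 4);   /* conn 5 is loaded only once */
    return 0;
}
```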
Abstract:
Methods and systems for direct device access are disclosed. Aspects of one method may include a plurality of guest operating systems (GOSs) directly accessing a first network interface device, where the first network interface device may provide access to a network. One or more of the GOSs may be migrated to directly access a second network interface device, based on state information for each of the GOSs, where the state information may be maintained by the host. The GOSs may communicate data to a device coupled to the network by directly accessing the first and/or second network interface device. Similarly, the first and/or second network interface device may communicate data received from a device coupled to the network to one or more of the plurality of GOSs via direct access of the first and/or second network interface device.
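One way to picture the migration step is the host replaying its saved per-GOS state into the second device and then re-pointing the guest at it. The sketch below makes that concrete under invented structure fields and names; it is not the disclosed implementation.

```c
/* Illustrative sketch of migrating a guest OS (GOS) from direct access
 * of one network interface device to another: the host keeps per-GOS
 * state and programs it into the second device before the guest
 * resumes direct access there.  Fields are assumptions. */
#include <stdint.h>
#include <stdio.h>

struct gos_state {              /* maintained by the host per guest   */
    uint8_t  mac[6];
    uint32_t rx_ring_base;
    uint32_t tx_ring_base;
    uint32_t next_rx_index;
};

struct nic_device {
    const char *name;
    struct gos_state bound[4];  /* state programmed per guest slot    */
};

/* Host-side migration: replay the saved state into the target device. */
static void migrate_gos(int gos_id, const struct gos_state *saved,
                        struct nic_device *from, struct nic_device *to)
{
    (void)from;                              /* quiesce/unbind omitted */
    to->bound[gos_id] = *saved;              /* program second device  */
    printf("GOS %d now directly accesses %s (rx index %u preserved)\n",
           gos_id, to->name, saved->next_rx_index);
}

int main(void)
{
    struct nic_device nic_a = { .name = "nic0" }, nic_b = { .name = "nic1" };
    struct gos_state st = { {0,1,2,3,4,5}, 0x1000, 0x2000, 42 };
    nic_a.bound[1] = st;                     /* guest 1 bound to nic0  */
    migrate_gos(1, &st, &nic_a, &nic_b);
    return 0;
}
```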
Abstract:
Certain aspects of a method and system for transparent transmission control protocol (TCP) offload with per-flow estimation of the far-end transmit window are disclosed. Aspects of a method may include storing, at a network interface card (NIC) processor, state information for a received TCP segment and state information for transmitted TCP segments for a determined network flow, without transferring the state information for the received TCP segment to a host system communicatively coupled to the NIC. The generation of a new TCP segment comprising the collected received TCP segments may be controlled based on the occurrence of a termination event and a transmit window size. The period of time for aggregation of received TCP segments may be calculated based on the sequence numbers of the next expected TCP segment and the next received acknowledgement packet.
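A hedged sketch of the aggregation decision: received segments are collected per flow, and a new aggregated segment is produced either on a termination event or when the far end is estimated to have little transmit window left to send into. The estimation formula and the flush threshold used here are deliberate simplifications, and all names are illustrative.

```c
/* Simplified per-flow aggregation with an estimate of the far end's
 * remaining transmit window.  The formula below is an illustration,
 * not the method claimed in the abstract above. */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

struct flow_ctx {
    uint32_t rcv_nxt;        /* sequence number of next expected segment  */
    uint32_t last_ack_sent;  /* highest ack number we have sent back      */
    uint32_t adv_window;     /* window we advertised to the far end       */
    uint32_t aggregated;     /* bytes collected but not yet delivered     */
};

/* Estimate how much more the far end may transmit before it must stall. */
static uint32_t far_end_window_left(const struct flow_ctx *f)
{
    uint32_t unacked_by_us = f->rcv_nxt - f->last_ack_sent;
    return (f->adv_window > unacked_by_us) ? f->adv_window - unacked_by_us : 0;
}

static void on_rx_segment(struct flow_ctx *f, uint32_t seq, uint32_t len,
                          bool termination_event)
{
    if (seq == f->rcv_nxt) {           /* in-order: aggregate in the NIC */
        f->rcv_nxt += len;
        f->aggregated += len;
    }
    /* Flush on a termination event, or when the far end is nearly stalled
     * (threshold of two 1460-byte segments is an arbitrary illustration). */
    if (termination_event || far_end_window_left(f) < 2 * 1460) {
        printf("deliver aggregated segment: %u bytes\n", f->aggregated);
        f->aggregated = 0;
        f->last_ack_sent = f->rcv_nxt; /* ack accompanies the delivery */
    }
}

int main(void)
{
    struct flow_ctx f = { .rcv_nxt = 1000, .last_ack_sent = 1000,
                          .adv_window = 65535 };
    on_rx_segment(&f, 1000, 1460, false);
    on_rx_segment(&f, 2460, 1460, true);   /* e.g. a PSH flag terminates */
    return 0;
}
```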
Abstract:
Certain aspects of a method and system for transparent transmission control protocol (TCP) offload with best-effort direct placement of incoming traffic are disclosed. Aspects of a method may include collecting TCP segments in a network interface card (NIC) processor without transferring state information to a host processor every time a TCP segment is received. When an event occurs that terminates the collection of TCP segments, the NIC processor may generate a new aggregated TCP segment based on the collected TCP segments. If a placement sequence number corresponding to the generated new TCP segment for the particular network flow is received before the TCP segment itself is received, the generated new TCP segment may be transferred directly from the memory to the user buffer instead of to a kernel buffer, which would otherwise require a further copy by the host stack from the kernel buffer to the user buffer.
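The best-effort placement choice can be sketched as a simple check: if a user buffer has already been posted for the placement sequence number of the aggregated segment, write the data straight into it; otherwise fall back to a kernel buffer and a later copy. The bookkeeping below is invented for illustration and covers a single flow with a single posted buffer.

```c
/* Minimal sketch of best-effort direct placement for one flow: data is
 * placed into a pre-posted user buffer when its placement sequence
 * number matches, and into a kernel buffer otherwise. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

struct posted_buffer {
    bool     valid;
    uint32_t start_seq;      /* placement sequence number from the host */
    uint8_t  user_buf[4096];
};

static struct posted_buffer posted;     /* one flow, one buffer, for brevity */
static uint8_t kernel_buf[4096];

/* Place an aggregated segment; returns true if it went to the user buffer. */
static bool place_segment(uint32_t seq, const uint8_t *data, uint32_t len)
{
    if (posted.valid && seq == posted.start_seq &&
        len <= sizeof(posted.user_buf)) {
        memcpy(posted.user_buf, data, len);     /* direct placement */
        posted.valid = false;
        return true;
    }
    memcpy(kernel_buf, data, len);              /* host stack copies later */
    return false;
}

int main(void)
{
    uint8_t payload[1460] = {0};
    posted = (struct posted_buffer){ .valid = true, .start_seq = 5000 };
    printf("direct placement: %s\n",
           place_segment(5000, payload, sizeof(payload))
               ? "yes" : "no (kernel buffer)");
    return 0;
}
```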