Abstract:
Aspects of a system for transporting information via a communications system may include a processor that enables establishing, from a local remote direct memory access (RDMA) enabled network interface card (RNIC), one or more communication channels, based on the transmission control protocol (TCP), between the local RNIC and at least one remote RNIC via at least one network. The processor may enable establishing at least one RDMA connection between one of a plurality of local RDMA endpoints and at least one remote RDMA endpoint utilizing the one or more communication channels. The processor may further enable communicating messages via the established RDMA connections between one of the plurality of local RDMA endpoints and at least one remote RDMA endpoint independent of whether the messages are in-sequence or out-of-sequence.
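As a rough illustration of the out-of-sequence delivery idea above, the sketch below (plain C, with hypothetical names such as rdma_msg and rdma_endpoint that are not taken from the abstract) tags each message with a sequence number and a buffer offset so the receiver can place it as soon as it arrives, whether or not earlier messages have been seen.

/*
 * Illustrative sketch only: messages carry a message sequence number (MSN)
 * and a buffer offset, so the receiver can place each one as it arrives,
 * in or out of order, instead of stalling on strict sequencing.
 * All names here are hypothetical, not taken from the abstract.
 */
#include <stdio.h>
#include <string.h>

#define APP_BUF_SIZE 64

struct rdma_msg {
    unsigned msn;       /* message sequence number assigned by the sender  */
    unsigned offset;    /* destination offset in the endpoint's buffer     */
    const char *data;   /* payload carried over the TCP-based channel      */
};

struct rdma_endpoint {
    char app_buf[APP_BUF_SIZE]; /* application buffer at the local endpoint */
    unsigned delivered;         /* count of messages placed so far          */
};

/* Place a message as soon as it arrives; strict sequencing is not required. */
static void deliver(struct rdma_endpoint *ep, const struct rdma_msg *m)
{
    size_t len = strlen(m->data);
    if (m->offset + len < sizeof(ep->app_buf)) {
        memcpy(ep->app_buf + m->offset, m->data, len);
        ep->delivered++;
    }
}

int main(void)
{
    struct rdma_endpoint ep = { .app_buf = {0}, .delivered = 0 };

    /* MSN 2 arrives before MSN 1: both are still placed immediately. */
    struct rdma_msg out_of_order = { .msn = 2, .offset = 6, .data = "world" };
    struct rdma_msg in_order     = { .msn = 1, .offset = 0, .data = "hello " };

    deliver(&ep, &out_of_order);
    deliver(&ep, &in_order);

    printf("delivered %u messages: %s\n", ep.delivered, ep.app_buf);
    return 0;
}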
Abstract:
Certain aspects of a method and system for quality of service and congestion management for converged network interface devices are disclosed. Aspects of a method may include processing at least one of: input/output (I/O) requests and network packets in a converged network interface card (CNIC) based on a class associated with each of the I/O requests and network packets, by storing, on the CNIC, information that identifies the I/O requests and network packets, without storing the I/O requests and network packets themselves on the CNIC.
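A minimal sketch of the storage idea, assuming hypothetical names (req_id, class_ring) and sizes: the card keeps only small per-class identifiers pointing at requests that remain in host memory, rather than the requests themselves.

/*
 * Illustrative sketch only: the device keeps, per traffic class, a ring of
 * small identifiers that point at requests still resident in host memory,
 * rather than copying the requests themselves onto the card.
 * Names and sizes are assumptions for the example.
 */
#include <stdio.h>
#include <stdint.h>

#define NUM_CLASSES 4
#define RING_DEPTH  8

struct req_id {
    uint64_t host_addr;  /* where the request/packet lives in host memory */
    uint32_t length;     /* size of the request/packet                    */
};

struct class_ring {
    struct req_id ids[RING_DEPTH];
    unsigned head, tail;
};

static struct class_ring rings[NUM_CLASSES];

/* Enqueue only the identifier for the request, selected by its class. */
static int enqueue(unsigned cls, uint64_t host_addr, uint32_t length)
{
    struct class_ring *r = &rings[cls % NUM_CLASSES];
    unsigned next = (r->tail + 1) % RING_DEPTH;
    if (next == r->head)
        return -1;                       /* ring full: apply back-pressure */
    r->ids[r->tail] = (struct req_id){ host_addr, length };
    r->tail = next;
    return 0;
}

int main(void)
{
    enqueue(0, 0x1000, 512);   /* high-priority I/O request   */
    enqueue(3, 0x9000, 1500);  /* best-effort network packet  */
    printf("class 0 pending: %u\n", (rings[0].tail - rings[0].head) % RING_DEPTH);
    return 0;
}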
Abstract:
Certain aspects of a method and system for host memory alignment may include splitting a received read and/or write I/O request at a first of a plurality of memory cache line boundaries to generate a first portion of the received I/O request. A second portion of the received read and/or write I/O request may be split into a plurality of segments so that each of the plurality of segments is aligned with one or more of the plurality of memory cache line boundaries. The cost of memory bandwidth for accessing host memory may be minimized based on the splitting of the second portion of the received read and/or write I/O request.
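The splitting rule can be illustrated as follows; the 64-byte cache line and the function name split_request are assumptions for the example, not details from the abstract.

/*
 * Illustrative sketch only: split an I/O request so that, after a short
 * head portion reaching the first cache-line boundary, every remaining
 * segment starts on a cache-line boundary. A 64-byte line is assumed.
 */
#include <stdio.h>
#include <stdint.h>

#define CACHE_LINE 64u

static void split_request(uint64_t addr, uint32_t len)
{
    /* First portion: from addr up to the next cache-line boundary. */
    uint32_t head = (CACHE_LINE - (addr % CACHE_LINE)) % CACHE_LINE;
    if (head > len)
        head = len;
    if (head)
        printf("head   : addr=0x%llx len=%u\n", (unsigned long long)addr, head);
    addr += head;
    len  -= head;

    /* Second portion: full cache-line-aligned segments, then the tail. */
    while (len) {
        uint32_t seg = len > CACHE_LINE ? CACHE_LINE : len;
        printf("segment: addr=0x%llx len=%u\n", (unsigned long long)addr, seg);
        addr += seg;
        len  -= seg;
    }
}

int main(void)
{
    split_request(0x1010, 200); /* unaligned start, spans several lines */
    return 0;
}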
Abstract:
Aspects of a high reliability system for transporting information across a network via a TCP tunnel are presented. The TCP tunnel may include a plurality of TCP connections that may be logically associated with a single TCP tunnel. At least a portion of the plurality of TCP connections may be associated with each of a plurality of different network interfaces. In a fault tolerant system, at least a current portion of a plurality of messages communicated via an RDMA connection may be transported by a current TCP connection associated with a current network interface located at a current RNIC. In the event of a subsequent failure in the current TCP connection, a subsequent portion of the plurality of messages may be communicated via a subsequent TCP connection associated with a different network interface. The different network interface may be located at the current RNIC or at a subsequent RNIC.
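A minimal sketch of the failover behavior, with hypothetical names (tcp_tunnel, fail_over) and a fixed set of three connections: when the current connection fails, traffic moves to the next healthy connection, which may sit on a different interface.

/*
 * Illustrative sketch only: a "tunnel" groups several TCP connections,
 * each tied to a network interface; when the current connection fails,
 * traffic moves to the next healthy one. Names are assumptions.
 */
#include <stdio.h>

struct tunnel_conn {
    const char *iface;  /* network interface carrying this TCP connection */
    int healthy;        /* 1 if the connection is currently usable        */
};

struct tcp_tunnel {
    struct tunnel_conn conns[3];
    int current;        /* index of the connection carrying traffic now   */
};

/* Pick the next healthy connection after a failure on the current one. */
static int fail_over(struct tcp_tunnel *t)
{
    t->conns[t->current].healthy = 0;
    for (int i = 0; i < 3; i++) {
        int idx = (t->current + 1 + i) % 3;
        if (t->conns[idx].healthy) {
            t->current = idx;
            return 0;
        }
    }
    return -1;  /* no usable connection left in the tunnel */
}

int main(void)
{
    struct tcp_tunnel t = {
        .conns = { { "eth0", 1 }, { "eth1", 1 }, { "eth2", 1 } },
        .current = 0,
    };
    printf("sending via %s\n", t.conns[t.current].iface);
    fail_over(&t);   /* current connection fails */
    printf("sending via %s\n", t.conns[t.current].iface);
    return 0;
}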
Abstract:
Aspects of a system for transporting information via a communications system may include a processor that establishes, via a local NIC, a communication channel between the local NIC and a remote NIC via a network. The processor may receive a datagram message from one of a plurality of local endpoints communicatively coupled to the local NIC, without a dedicated connection. The datagram message may be delivered to one of a plurality of remote endpoints communicatively coupled to the remote NIC. The processor may communicate the datagram message via the local NIC, over the communication channel, to one of the plurality of remote endpoints without establishing a dedicated connection between the one of the plurality of local endpoints and the one of the plurality of remote endpoints.
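A minimal sketch of connectionless delivery over a shared channel; the datagram structure and endpoint identifiers below are illustrative assumptions, not details from the abstract.

/*
 * Illustrative sketch only: many local endpoints share one channel to the
 * remote NIC; each datagram carries source and destination endpoint IDs,
 * so no per-endpoint connection is set up. Names are assumptions.
 */
#include <stdio.h>
#include <stdint.h>

struct datagram {
    uint16_t src_ep;     /* local endpoint that produced the message   */
    uint16_t dst_ep;     /* remote endpoint the message is destined to */
    const char *payload;
};

/* Stand-in for the single shared channel between the two NICs. */
static void channel_send(const struct datagram *d)
{
    printf("endpoint %u -> endpoint %u: %s\n", d->src_ep, d->dst_ep, d->payload);
}

int main(void)
{
    /* Two different endpoint pairs reuse the same channel, with no setup. */
    struct datagram a = { .src_ep = 1, .dst_ep = 7, .payload = "query"  };
    struct datagram b = { .src_ep = 4, .dst_ep = 2, .payload = "notify" };
    channel_send(&a);
    channel_send(&b);
    return 0;
}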
Abstract:
Methods and systems for a plurality of physical layers for network connection may include coupling a MAC to one of a plurality of PHYs. The coupling to a specific PHY may be based on auto-detection of network activity, or network devices, via the PHYs. Also, one of the PHYs may be coupled to the MAC as a power-up default. The PHYs may be coupled to a same network, by, for example, cables. A first cable to a first PHY may couple it to a first network switch and a second cable to a second PHY may couple it to a second network switch. The first network switch may be rated to handle, for example, a greater data rate than the second network switch. The first cable may not be usable with the second PHY, and vice versa.
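A minimal sketch of the PHY selection logic, assuming two PHYs, PHY 0 as the power-up default, and a stubbed link_up() check standing in for activity auto-detection; none of these names come from the abstract.

/*
 * Illustrative sketch only: one MAC, several PHYs; PHY 0 is the power-up
 * default, and the MAC switches to whichever PHY reports link activity.
 * Names and the link_up() stub are assumptions for the example.
 */
#include <stdio.h>

#define NUM_PHYS 2

/* Stub: in hardware this would poll link/activity status per PHY. */
static int link_up(int phy)
{
    return phy == 1;   /* pretend only the second PHY sees a live cable */
}

static int select_phy(void)
{
    int selected = 0;  /* power-up default: couple the MAC to PHY 0 */
    for (int phy = 0; phy < NUM_PHYS; phy++) {
        if (link_up(phy)) {
            selected = phy;   /* auto-detected activity wins */
            break;
        }
    }
    return selected;
}

int main(void)
{
    printf("MAC coupled to PHY %d\n", select_phy());
    return 0;
}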
Abstract:
Certain aspects of a method and system for protocol offload in paravirtualized systems are disclosed. Exemplary aspects of the method may include preposting application buffers to a front-end driver, rather than to a NIC, in a paravirtualized system. The NIC may be enabled to place the received offloaded data packets into a received data buffer corresponding to a particular guest operating system (GOS). A back-end driver may be enabled to acknowledge the placed offloaded data packets. The back-end driver may be enabled to forward the received data buffer corresponding to the particular GOS to the front-end driver. The front-end driver may be enabled to copy the offloaded data packets from the received data buffer corresponding to the particular GOS to the preposted application buffers.
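A minimal sketch of the front-end copy step, with hypothetical structures (app_buffer, gos_rx_buffer); the back-end driver is assumed to have already forwarded the per-GOS receive buffer filled by the NIC.

/*
 * Illustrative sketch only: the guest's front-end driver holds preposted
 * application buffers; the back-end driver hands it the per-GOS receive
 * buffer filled by the NIC, and the front-end copies the offloaded data
 * into a preposted buffer. All structure names are assumptions.
 */
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 128

struct app_buffer {
    char data[BUF_SIZE];
    int  posted;                 /* 1 while preposted and still empty */
};

struct gos_rx_buffer {
    char data[BUF_SIZE];         /* filled by the NIC for this GOS    */
    unsigned len;
};

/* Front-end driver: copy offloaded data into a preposted buffer. */
static int frontend_receive(struct app_buffer *bufs, int n,
                            const struct gos_rx_buffer *rx)
{
    for (int i = 0; i < n; i++) {
        if (bufs[i].posted) {
            memcpy(bufs[i].data, rx->data, rx->len);
            bufs[i].posted = 0;
            return i;            /* buffer index now holding the data */
        }
    }
    return -1;                   /* no preposted buffer available     */
}

int main(void)
{
    struct app_buffer posted[2] = { { .posted = 1 }, { .posted = 1 } };
    struct gos_rx_buffer rx = { .len = 5 };
    memcpy(rx.data, "hello", 5);                 /* NIC placed this for the GOS */

    int idx = frontend_receive(posted, 2, &rx);  /* back-end forwarded rx       */
    printf("data landed in preposted buffer %d\n", idx);
    return 0;
}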
Abstract:
Certain aspects of a method and system for transparent transmission control protocol (TCP) offload with per flow estimation of far end transmit window are disclosed. Aspects of a method may include storing, at a network interface card (NIC) processor, state information for a received TCP segment and state information for transmitted TCP segments for a determined network flow, without transferring state information for the received TCP segment to a host system communicatively coupled to the NIC. The generation of a new TCP segment comprising the collected received TCP segments may be controlled based on the occurrence of a termination event and a transmit window size. The period of time for aggregation of received TCP segments may be calculated based on the sequence numbers of the next expected TCP segment and the next received acknowledgement packet.
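One possible reading of the aggregation control, sketched with assumed names and an assumed window estimate (the gap between the next expected sequence number and the last acknowledgement sent back): segments are collected until a termination event occurs or the estimate is reached.

/*
 * Illustrative sketch only: the NIC-side aggregator collects received TCP
 * segments for a flow and flushes a single larger segment either on a
 * termination event (PSH, FIN, timeout) or when the aggregate reaches an
 * estimate of the far end's transmit window. The estimate used here is an
 * assumption chosen for the example, not the method from the abstract.
 */
#include <stdio.h>
#include <stdint.h>

struct flow_aggr {
    uint32_t agg_bytes;      /* bytes collected since the last flush       */
    uint32_t next_expected;  /* next expected receive sequence number      */
    uint32_t last_ack_sent;  /* last acknowledgement returned to the peer  */
};

static void flush(struct flow_aggr *f, const char *why)
{
    printf("flush %u aggregated bytes (%s)\n", f->agg_bytes, why);
    f->agg_bytes = 0;
}

/* Collect one received segment; decide whether to keep aggregating. */
static void on_segment(struct flow_aggr *f, uint32_t seq, uint32_t len,
                       int termination_event, uint32_t window_estimate)
{
    if (seq == f->next_expected)
        f->next_expected += len;
    f->agg_bytes += len;

    if (termination_event)
        flush(f, "termination event");
    else if (f->next_expected - f->last_ack_sent >= window_estimate)
        flush(f, "far-end window estimate reached");
}

int main(void)
{
    struct flow_aggr f = { 0, 1000, 1000 };
    on_segment(&f, 1000, 1460, 0, 8192);  /* keep aggregating        */
    on_segment(&f, 2460, 1460, 1, 8192);  /* PSH: flush what we have */
    return 0;
}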
Abstract:
Certain aspects of a method and system for transparent transmission control protocol (TCP) offload with best effort direct placement of incoming traffic are disclosed. Aspects of a method may include collecting TCP segments in a network interface card (NIC) processor without transferring state information to a host processor every time a TCP segment is received. When an event occurs that terminates the collection of TCP segments, the NIC processor may generate a new aggregated TCP segment based on the collected TCP segments. If a placement sequence number corresponding to the generated new TCP segment for the particular network flow is received before the TCP segment is received, the generated new TCP segment may be transferred directly from memory to the user buffer instead of to a kernel buffer, which would require a further copy by the host stack from the kernel buffer to the user buffer.
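A minimal sketch of the placement decision, with hypothetical names: if a matching placement sequence number was posted before the aggregated segment is ready, the data goes straight to the user buffer; otherwise it falls back to a kernel buffer copy.

/*
 * Illustrative sketch only: if the placement information (a buffer posted
 * with a placement sequence number) is already known when the aggregated
 * segment is ready, the data goes straight to the user buffer; otherwise it
 * falls back to a kernel buffer and a later copy. Names are assumptions.
 */
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 256

struct placement {
    unsigned seq;     /* placement sequence number posted by the host  */
    char *user_buf;   /* user buffer where data may be placed directly */
};

/* Deliver an aggregated segment, preferring direct placement. */
static void deliver(const struct placement *p, unsigned seg_seq,
                    const char *data, unsigned len, char *kernel_buf)
{
    if (p && p->seq == seg_seq) {
        memcpy(p->user_buf, data, len);      /* best-effort direct placement */
        printf("placed %u bytes directly into the user buffer\n", len);
    } else {
        memcpy(kernel_buf, data, len);       /* host stack copies later      */
        printf("fell back to the kernel buffer for %u bytes\n", len);
    }
}

int main(void)
{
    char user_buf[BUF_SIZE], kernel_buf[BUF_SIZE];
    struct placement posted = { .seq = 5000, .user_buf = user_buf };

    deliver(&posted, 5000, "payload", 7, kernel_buf);  /* placement known  */
    deliver(NULL,    6000, "payload", 7, kernel_buf);  /* not yet posted   */
    return 0;
}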