Abstract:
A system including a network interface layer, and a physical network connection configured to connect with a networking medium, wherein the network interface layer is configured to: A) receive a user datagram protocol (UDP) message for sending, the UDP message having a length L, and a desired maximum network message size (MSS), B) segment the UDP message in accordance with the MSS into a plurality of message segments, each message segment having a size no greater than MSS, and adjust information in each of the plurality of message segments, and C) send the plurality of message segments via the physical network connection to the networking medium. Related apparatus and methods are also provided.
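A minimal Python sketch of the segmentation step described above: a UDP message of length L is cut into segments of at most MSS bytes, and the length field in each per-segment UDP header is adjusted to match. The header layout is the standard 8-byte UDP header; a zero checksum is the "checksum not computed" convention for UDP over IPv4, and the function name is illustrative rather than taken from any particular driver.

    import struct

    def segment_udp(payload: bytes, src_port: int, dst_port: int, mss: int):
        """Split a UDP payload into segments of at most `mss` bytes,
        each carrying its own 8-byte UDP header with an adjusted length field."""
        if mss <= 8:
            raise ValueError("MSS must exceed the 8-byte UDP header")
        max_data = mss - 8                      # payload room per segment
        segments = []
        for off in range(0, len(payload), max_data):
            chunk = payload[off:off + max_data]
            length = 8 + len(chunk)             # adjusted per-segment UDP length
            # checksum of 0 means "not computed" for UDP over IPv4
            header = struct.pack("!HHHH", src_port, dst_port, length, 0)
            segments.append(header + chunk)
        return segments

    if __name__ == "__main__":
        msg = bytes(3000)                       # a UDP message of length L = 3000
        segs = segment_udp(msg, 12345, 53, mss=1472)
        print([len(s) for s in segs])           # every segment is <= MSS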
Abstract:
Serialization and deserialization of an object are performed by transmitting metadata and addresses of data members residing in a first memory in a byte stream through a data network, receiving the byte stream from the data network, defining a container for the object, obtaining the addresses of the data members in the first memory from the received byte stream, applying direct memory access (DMA) or remote direct memory access (RDMA) to read the data members using the obtained addresses, and writing the data members into the container to create a new instance of the object.
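The receiving side of this scheme might look like the following Python sketch: the byte stream carries only metadata (member names and lengths) together with the remote addresses, and each data member is then pulled in through a read callback standing in for a DMA/RDMA read before being written into the container. The wire format, the rdma_read callback, and all names here are assumptions made for illustration, not an actual verbs API.

    import struct
    from typing import Callable, Dict, List, Tuple

    # Each metadata entry: (member name, remote address, length in bytes)
    Member = Tuple[str, int, int]

    def parse_metadata(stream: bytes) -> List[Member]:
        """Decode the received byte stream into (name, address, length) triples."""
        members, off = [], 0
        while off < len(stream):
            name_len, addr, length = struct.unpack_from("!BQI", stream, off)
            off += 13
            name = stream[off:off + name_len].decode()
            off += name_len
            members.append((name, addr, length))
        return members

    def deserialize(stream: bytes, rdma_read: Callable[[int, int], bytes]) -> Dict[str, bytes]:
        """Create a new instance (here, a dict container) of the object: allocate the
        container, then read each member from the sender's memory via the supplied
        RDMA-read callback and write it into the container."""
        obj: Dict[str, bytes] = {}
        for name, addr, length in parse_metadata(stream):
            obj[name] = rdma_read(addr, length)
        return obj

    if __name__ == "__main__":
        remote = {0x1000: b"hello", 0x2000: b"world!"}      # stand-in for remote memory
        stream = b"".join(
            struct.pack("!BQI", len(n), a, len(remote[a])) + n.encode()
            for n, a in [("greeting", 0x1000), ("target", 0x2000)]
        )
        print(deserialize(stream, lambda addr, ln: remote[addr][:ln]))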
Abstract:
A network adapter includes one or more network ports, multiple bus interfaces, and a processor. The one or more network ports are configured to communicate with a communication network. The multiple bus interfaces are configured to communicate with multiple respective Central Processing Units (CPUs) that belong to a multi-CPU device. The processor is configured to support an Option-ROM functionality, in which the network adapter holds Option-ROM program instructions that are loadable and executable by the multi-CPU device during a boot process, and, in response to a request from the multi-CPU device to report the support of the Option-ROM functionality, to report the support of the Option-ROM functionality over only a single bus interface, selected from among the multiple bus interfaces connecting the network adapter to the multi-CPU device.
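The selective reporting can be modelled with a small Python sketch: the adapter is queried about Option-ROM support on each of its bus interfaces but answers affirmatively on only one selected interface, so the multi-CPU device loads and executes the expansion ROM exactly once during boot. The class and method names, and the "first interface" selection policy, are illustrative assumptions.

    class MultiHostAdapter:
        """Toy model of a NIC exposing several bus interfaces to a multi-CPU host.
        Option-ROM support is reported on exactly one selected interface, so the
        boot firmware loads and runs the expansion ROM only once."""

        def __init__(self, bus_interfaces, rom_image: bytes):
            self.bus_interfaces = list(bus_interfaces)
            self.rom_image = rom_image
            self.rom_interface = self.bus_interfaces[0]   # selection policy: first interface

        def reports_option_rom(self, interface) -> bool:
            """Answer a per-interface host query: "do you support an Option-ROM?"."""
            return interface == self.rom_interface

        def read_option_rom(self, interface) -> bytes:
            if not self.reports_option_rom(interface):
                raise PermissionError("Option-ROM is not exposed on this interface")
            return self.rom_image

    adapter = MultiHostAdapter(["pcie0", "pcie1"], rom_image=b"\x55\xaa")
    print([adapter.reports_option_rom(i) for i in ["pcie0", "pcie1"]])   # [True, False]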
Abstract:
A processor is configured to receive, from a client, a first message indicating a request to establish a connection between the client and a server, to ascertain that the first message does not include any cookie satisfying one or more criteria, to send, to the client, a second message that includes a first cookie, without allocating an endpoint on the server for the connection, in response to ascertaining that the first message does not include any cookie satisfying the criteria, to receive subsequently, from the client, a third message, to ascertain that the third message includes a second cookie, and that the second cookie satisfies the criteria, to allocate the endpoint for the connection in response to ascertaining that the second cookie satisfies the criteria, and to send, to the client, a fourth message indicating that the server is ready to receive data communication at the allocated endpoint.
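This exchange resembles a stateless, SYN-cookie-style handshake. The sketch below (Python) computes the cookie as an HMAC over the client identity and a coarse timestamp; a first message without a satisfying cookie gets a cookie back and allocates nothing, while a later message carrying a valid cookie triggers allocation of the endpoint. The cookie format, lifetime, and validity criteria are illustrative assumptions, not the scheme claimed above.

    import hmac, hashlib, time, secrets
    from typing import Optional, Tuple

    SECRET = secrets.token_bytes(32)
    COOKIE_LIFETIME = 60                      # seconds a cookie remains acceptable

    def make_cookie(client_addr: str) -> bytes:
        ts = int(time.time())
        mac = hmac.new(SECRET, f"{client_addr}|{ts}".encode(), hashlib.sha256).digest()
        return ts.to_bytes(8, "big") + mac[:16]

    def cookie_is_valid(client_addr: str, cookie: bytes) -> bool:
        if len(cookie) != 24:
            return False
        ts = int.from_bytes(cookie[:8], "big")
        if time.time() - ts > COOKIE_LIFETIME:
            return False
        expected = hmac.new(SECRET, f"{client_addr}|{ts}".encode(), hashlib.sha256).digest()[:16]
        return hmac.compare_digest(cookie[8:], expected)

    endpoints = {}                            # allocated only after a valid cookie arrives

    def handle_request(client_addr: str, cookie: Optional[bytes]) -> Tuple[str, Optional[bytes]]:
        """First message (no satisfying cookie): answer with a fresh cookie and allocate
        nothing. A later message carrying a valid cookie: allocate the endpoint for the
        connection and report that the server is ready."""
        if cookie is None or not cookie_is_valid(client_addr, cookie):
            return "retry-with-cookie", make_cookie(client_addr)
        endpoints[client_addr] = object()     # stand-in for the real server endpoint
        return "ready", None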
Abstract:
A network node includes one or more network adapters and a bonding driver. The one or more network adapters are configured to communicate respective data flows over a communication network by applying a transport layer protocol that saves communication state information in a state of the respective network adapter. The bonding driver is configured to exchange traffic including the data flows of an application program that is executed in the network node, to communicate the data flows of the traffic via one or more physical links of the one or more network adapters, and, in response to a physical-transport failure, to switch a given data flow to a different physical link or a different network path, transparently to the application program.
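A toy Python model of the failover behaviour: each flow is pinned to one physical link, and when a physical-transport failure is detected the bonding driver migrates the flow to a surviving link without the application ever seeing a change. Link names, the failure-detection hook, and the link-selection policy are all invented for illustration.

    class BondingDriver:
        """Toy bonding driver: each flow is pinned to one physical link; on a
        physical-transport failure the flow is switched to a surviving link,
        transparently to the application that owns the flow."""

        def __init__(self, links):
            self.links = {name: True for name in links}   # link -> is it up?
            self.flow_to_link = {}

        def open_flow(self, flow_id):
            link = next(name for name, up in self.links.items() if up)
            self.flow_to_link[flow_id] = link
            return link

        def send(self, flow_id, payload: bytes):
            link = self.flow_to_link[flow_id]
            if not self.links[link]:                      # failure detected on this path
                link = self._failover(flow_id)
            print(f"flow {flow_id}: {len(payload)} bytes via {link}")

        def link_down(self, link):
            self.links[link] = False

        def _failover(self, flow_id):
            new_link = next(name for name, up in self.links.items() if up)
            self.flow_to_link[flow_id] = new_link         # the application sees no change
            return new_link

    bond = BondingDriver(["mlx0", "mlx1"])
    bond.open_flow("A")
    bond.link_down("mlx0")
    bond.send("A", b"payload")                            # re-routed via mlx1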
Abstract:
Peripheral apparatus for use with a host computer includes an add-on device, which includes a first network port coupled to one end of a packet communication link, and add-on logic, which is configured to receive and transmit packets containing data over the packet communication link and to perform computational operations on the data. A network interface controller (NIC) includes a host bus interface, configured for connection to the host bus of the host computer, and a second network port, coupled to the other end of the packet communication link. Packet processing logic in the NIC is coupled between the host bus interface and the second network port, and is configured to translate between the packets transmitted and received over the packet communication link and transactions executed on the host bus, so as to provide access between the add-on device and the resources of the host computer.
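The translation role of the packet processing logic can be sketched as follows (Python): read and write requests arriving as packets over the dedicated link are converted into host-bus transactions against host resources, and completions are returned as packets over the same link. The packet format, the opcodes, and the bus_read/bus_write stand-ins are assumptions made for illustration.

    import struct

    HOST_MEMORY = bytearray(1 << 16)          # stand-in for resources behind the host bus

    OP_READ, OP_WRITE = 0, 1
    HEADER = struct.Struct("!BIH")            # opcode, bus address, length

    def bus_read(addr: int, length: int) -> bytes:        # host-bus transaction models
        return bytes(HOST_MEMORY[addr:addr + length])

    def bus_write(addr: int, data: bytes) -> None:
        HOST_MEMORY[addr:addr + len(data)] = data

    def handle_link_packet(packet: bytes) -> bytes:
        """Translate a packet from the add-on device into a host-bus transaction
        and return the completion packet sent back over the link."""
        op, addr, length = HEADER.unpack_from(packet)
        payload = packet[HEADER.size:]
        if op == OP_WRITE:
            bus_write(addr, payload)
            return HEADER.pack(OP_WRITE, addr, len(payload))    # write acknowledgement
        return HEADER.pack(OP_READ, addr, length) + bus_read(addr, length)

    # the add-on device writes 4 bytes into host memory, then reads them back
    handle_link_packet(HEADER.pack(OP_WRITE, 0x100, 4) + b"\xde\xad\xbe\xef")
    print(handle_link_packet(HEADER.pack(OP_READ, 0x100, 4))[HEADER.size:])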
Abstract:
In a data network, congestion control in a virtualized environment is enforced in packet flows to and from virtual machines in a host. A hypervisor and network interface hardware in the host are trusted components. Enforcement comprises estimating congestion states in the data network attributable to respective packet flows, recognizing a new packet that belongs to one of the packet flows, and using one or more of the trusted components to make a determination, based on the congestion states, that the new packet belongs to a congestion-producing packet flow. A congestion-control policy is applied by one or more of the trusted components to the new packet responsively to the determination.
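One way to picture the enforcement path inside a trusted component (hypervisor or NIC), as a Python sketch: a per-flow congestion estimate is maintained from congestion feedback such as ECN marks, each new packet is attributed to its flow, and a policy decision is applied when the flow is deemed congestion-producing. The threshold, the ECN-based estimator, and the "throttle"/"forward" actions are illustrative assumptions.

    from collections import defaultdict

    ECN_THRESHOLD = 0.3                      # fraction of marked packets that flags a flow

    class CongestionEnforcer:
        """Per-flow congestion-state estimation and policy enforcement, as a trusted
        component (hypervisor or NIC) would apply it to guest traffic."""

        def __init__(self):
            self.marked = defaultdict(int)   # flow -> packets seen with congestion marks
            self.total = defaultdict(int)    # flow -> packets seen in total

        def note_feedback(self, flow, ecn_marked: bool):
            """Update the estimated congestion state attributable to this flow."""
            self.total[flow] += 1
            if ecn_marked:
                self.marked[flow] += 1

        def is_congestion_producing(self, flow) -> bool:
            total = self.total[flow]
            return total > 0 and self.marked[flow] / total > ECN_THRESHOLD

        def handle_packet(self, flow, packet: bytes) -> str:
            """Recognize the packet's flow and apply the congestion-control policy."""
            if self.is_congestion_producing(flow):
                return "throttle"            # e.g. pace, mark, or drop per policy
            return "forward"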
Abstract:
Data processing apparatus includes a host processor and a network interface controller (NIC), which is configured to couple the host processor to a packet data network. A memory holds a flow state table containing context information with respect to computational operations to be performed on multiple packet flows conveyed between the host processor and the network. Acceleration logic is coupled to perform the computational operations on payloads of packets in the multiple packet flows using the context information in the flow state table.
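A small Python sketch of how the flow state table and the acceleration logic fit together, using a running CRC32 per flow as a stand-in for whatever stateful computation is performed on the payloads; flows are keyed by a 5-tuple, and the context carried in the table lets the computation continue across packets of the same flow. The classes and the choice of CRC32 are illustrative, not taken from the apparatus described above.

    import zlib

    class FlowStateTable:
        """Context per flow (keyed by a 5-tuple); here the context is simply the
        running CRC32 of everything seen so far on that flow."""
        def __init__(self):
            self.table = {}
        def lookup(self, five_tuple):
            return self.table.setdefault(five_tuple, 0)
        def update(self, five_tuple, context):
            self.table[five_tuple] = context

    class AccelerationLogic:
        """Performs the per-packet computation on the payload, carrying the
        flow's context across packets via the flow state table."""
        def __init__(self, flows: FlowStateTable):
            self.flows = flows
        def process(self, five_tuple, payload: bytes) -> int:
            ctx = self.flows.lookup(five_tuple)
            ctx = zlib.crc32(payload, ctx)
            self.flows.update(five_tuple, ctx)
            return ctx

    accel = AccelerationLogic(FlowStateTable())
    flow = ("10.0.0.1", "10.0.0.2", 6, 40000, 80)
    print(hex(accel.process(flow, b"first packet")))
    print(hex(accel.process(flow, b"second packet")))    # continues the same context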
Abstract:
A method in a network element that includes multiple interfaces for connecting to a communication network includes receiving from the communication network, via an ingress interface, a flow including a sequence of packets, and routing the packets to a destination of the flow via a first egress interface. A permission indication for re-routing the flow is received in the ingress interface. In response to receiving the permission indication, subsequent packets of the flow are re-routed via a second egress interface that is different from the first egress interface. Further re-routing of the flow is withheld until another permission indication is received.
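The permission-gated behaviour can be modelled as a small per-flow state machine, sketched here in Python: packets follow the current egress interface, a permission indication arms a single re-route to a different egress, and once that re-route is consumed the flow refrains from further re-routing until another permission arrives. Interface names and the API are illustrative assumptions.

    class FlowRouter:
        """Per-flow routing with permission-gated re-routing: the flow moves to a
        different egress interface at most once per permission indication."""

        def __init__(self, first_egress):
            self.egress = first_egress
            self.may_reroute = False

        def grant_permission(self):
            self.may_reroute = True              # permission indication received

        def reroute(self, second_egress) -> bool:
            """Re-route only if permission was received and not yet consumed."""
            if not self.may_reroute or second_egress == self.egress:
                return False
            self.egress = second_egress
            self.may_reroute = False             # refrain until the next permission
            return True

        def forward(self, packet: bytes) -> str:
            return self.egress                   # subsequent packets use the current egress

    flow = FlowRouter("eth1")
    print(flow.reroute("eth2"))   # False: no permission yet
    flow.grant_permission()
    print(flow.reroute("eth2"))   # True: re-routed once
    print(flow.reroute("eth3"))   # False: refrained until another permission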
Abstract:
A method for data transfer includes receiving in an input/output (I/O) operation data to be written to a specified virtual address in a host memory. Upon receiving the data, it is detected that a first page that contains the specified virtual address is swapped out of the host memory. Responsively to detecting that the first page is swapped out, the received data are written to a second, free page in the host memory, and the specified virtual address is remapped to the free page.
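A toy Python page-table model of the remapping step: an I/O write arrives for a virtual address whose page is swapped out, and rather than waiting for a swap-in, the data are written to a free physical page and the virtual page is remapped to it. The page size, the residency bookkeeping, and all names are assumptions made for illustration.

    PAGE_SIZE = 4096

    class HostMemory:
        """Toy page table: virtual page -> physical frame, with a residency flag."""
        def __init__(self, frames):
            self.page_table = {}                    # virtual page -> frame index
            self.resident = {}                      # virtual page -> currently in memory?
            self.free_frames = list(range(frames))
            self.frames = [bytearray(PAGE_SIZE) for _ in range(frames)]

        def io_write(self, vaddr: int, data: bytes):
            vpage, off = divmod(vaddr, PAGE_SIZE)
            if not self.resident.get(vpage, False):
                # the page containing vaddr is swapped out: take a free page and remap
                frame = self.free_frames.pop()
                self.page_table[vpage] = frame
                self.resident[vpage] = True
            frame = self.page_table[vpage]
            self.frames[frame][off:off + len(data)] = data

    mem = HostMemory(frames=8)
    mem.io_write(0x5000, b"incoming DMA data")      # target page was swapped out; remapped
    print(mem.page_table)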