Abstract:
Aspects of data re-direction are described, which can include software-defined networking (SDN) data re-direction operations. Some aspects include data re-direction operations performed by one or more virtualized network functions. In some aspects, a network router decodes an indication of a handover of a user equipment (UE) from a first end point (EP) to a second EP. Based on the indication, the router can update a relocation table that includes an identifier of the UE, an identifier of the first EP, and an identifier of the second EP. The router can receive a data packet for the UE that is configured for transmission to the first EP, and modify the data packet, based on the relocation table, for rerouting to the second EP. In some aspects, the router can decode handover prediction information, including an indication of a predicted future geographic location of the UE, and update the relocation table based on the handover prediction information.
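As a rough illustration of the relocation-table mechanism described above, the following C sketch records a handover and rewrites the destination of a packet still addressed to the old end point. The structure names, the fixed-size table, and the numeric identifiers are illustrative assumptions, not the disclosure's implementation.

    /* Minimal sketch of a relocation-table update and packet reroute. */
    #include <stdio.h>

    #define MAX_ENTRIES 16

    struct relocation_entry {
        unsigned ue_id;      /* identifier of the user equipment          */
        unsigned first_ep;   /* end point the UE was handed over from     */
        unsigned second_ep;  /* end point the UE was handed over to       */
    };

    struct packet {
        unsigned ue_id;
        unsigned dest_ep;    /* end point the packet is currently addressed to */
    };

    static struct relocation_entry table[MAX_ENTRIES];
    static int table_len;

    /* Record a decoded handover indication in the relocation table. */
    static void record_handover(unsigned ue_id, unsigned first_ep, unsigned second_ep)
    {
        if (table_len < MAX_ENTRIES)
            table[table_len++] = (struct relocation_entry){ ue_id, first_ep, second_ep };
    }

    /* Rewrite the destination of a packet still addressed to the old EP. */
    static void reroute_packet(struct packet *pkt)
    {
        for (int i = 0; i < table_len; i++) {
            if (table[i].ue_id == pkt->ue_id && table[i].first_ep == pkt->dest_ep) {
                pkt->dest_ep = table[i].second_ep;   /* re-direct to the new EP */
                return;
            }
        }
    }

    int main(void)
    {
        record_handover(7, 100, 200);              /* UE 7 moved from EP 100 to EP 200 */
        struct packet pkt = { .ue_id = 7, .dest_ep = 100 };
        reroute_packet(&pkt);
        printf("packet for UE %u now routed to EP %u\n", pkt.ue_id, pkt.dest_ep);
        return 0;
    }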
Abstract:
Generally, this disclosure provides systems, methods and computer readable media for management of sockets and device queues for reduced latency packet processing. The method may include maintaining a unique-list comprising entries that identify device queues and an associated unique socket for each of the device queues, the unique socket selected from a plurality of sockets configured to receive packets; busy-polling the device queues on the unique-list; receiving a packet from one of the plurality of sockets; and updating the unique-list in response to detecting that the received packet was provided by an interrupt processing module. The updating may include identifying a device queue associated with the received packet; identifying a socket associated with the received packet; and, if the identified device queue does not appear in any entry on the unique-list, creating a new entry on the unique-list comprising the identified device queue and the identified socket.
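The update step described above can be pictured with the following C sketch, which adds a (queue, socket) pair to the unique-list only when the packet arrived via the interrupt path and the queue is not already listed. The names and the fixed-size array are illustrative assumptions, not the disclosure's data structures.

    /* Minimal sketch of the unique-list update for busy-polled device queues. */
    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_UNIQUE 8

    struct unique_entry {
        int queue_id;   /* device queue to busy-poll                      */
        int socket_id;  /* unique socket associated with that queue       */
    };

    static struct unique_entry unique_list[MAX_UNIQUE];
    static int unique_len;

    /* Called when a packet is received: if it was provided by the interrupt
       processing path and its queue is not yet on the unique-list, create a
       new entry pairing the queue with the packet's socket. */
    static void update_unique_list(int queue_id, int socket_id, bool from_interrupt)
    {
        if (!from_interrupt)
            return;                         /* busy-polled packets need no update */
        for (int i = 0; i < unique_len; i++)
            if (unique_list[i].queue_id == queue_id)
                return;                     /* queue already covered by an entry  */
        if (unique_len < MAX_UNIQUE)
            unique_list[unique_len++] = (struct unique_entry){ queue_id, socket_id };
    }

    int main(void)
    {
        update_unique_list(3, 42, true);    /* interrupt-delivered packet: queue 3, socket 42 */
        update_unique_list(3, 42, true);    /* same queue again: list unchanged               */
        for (int i = 0; i < unique_len; i++)
            printf("busy-poll queue %d for socket %d\n",
                   unique_list[i].queue_id, unique_list[i].socket_id);
        return 0;
    }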
Abstract:
Methods and apparatus for implementing flow control with reduced buffer usage for network devices. In response to detection of a flow control event, transmission of a data unit or segment such as an Ethernet frame is preempted in favor of a flow control message, which aborts transmission of the frame. Data corresponding to the entirety of the frame is buffered at the transmitting station until the frame has been transmitted (or after a delay), enabling retransmission of the aborted frame. Preempting frames in favor of flow control messages yields earlier responses to flow control events, enabling buffer sizes to be reduced.
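The preemption sequence can be sketched in C as below: when a flow control event is pending, the in-progress frame is aborted, the flow control message is sent first, and the still-buffered frame is then retransmitted. The event flag, function names, and simulated transmit calls are illustrative assumptions rather than the disclosure's mechanism.

    /* Minimal sketch of preempting a frame in favor of a flow control message. */
    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical flow control event flag, e.g. set when a receive buffer fills. */
    static bool flow_control_event;

    static void send_flow_control_message(void) { puts("TX: pause message"); }
    static void abort_frame(int id)             { printf("TX: abort frame %d\n", id); }
    static void transmit_frame(int id)          { printf("TX: frame %d sent\n", id); }

    /* Transmit one buffered frame; if a flow control event is pending,
       abort the frame, send the flow control message first, then retransmit
       the frame from the transmit buffer. */
    static void transmit_with_preemption(int frame_id)
    {
        if (flow_control_event) {
            abort_frame(frame_id);              /* frame data stays buffered          */
            send_flow_control_message();        /* pause message goes out immediately */
            flow_control_event = false;
        }
        transmit_frame(frame_id);               /* retransmission of the aborted frame */
    }

    int main(void)
    {
        flow_control_event = true;              /* simulate a buffer-nearly-full event */
        transmit_with_preemption(1);
        transmit_with_preemption(2);
        return 0;
    }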
Abstract:
Generally, this disclosure provides devices, methods and computer readable media for packet processing with reduced latency. The device may include a data queue to store data descriptors associated with data packets, the data packets to be transferred between a network and a driver circuit. The device may also include an interrupt generation circuit to generate an interrupt to the driver circuit. The interrupt may be generated in response to a combination of an expiration of a delay timer and a non-empty condition of the data queue. The device may further include an interrupt delay register to enable the driver circuit to reset the delay timer, the reset postponing the interrupt generation.
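The interrupt condition described above (delay timer expired AND data queue non-empty, with the driver able to postpone the interrupt by rewriting the delay register) can be modeled with the following C sketch. The structure fields, tick-based timer, and reload value are illustrative assumptions, not the device's register layout.

    /* Minimal sketch of delay-timer-gated interrupt generation. */
    #include <stdio.h>
    #include <stdbool.h>

    struct device {
        int queue_depth;      /* descriptors currently in the data queue            */
        int delay_timer;      /* ticks remaining until the delay timer expires       */
        int delay_reload;     /* value the driver writes to the interrupt delay reg. */
    };

    /* Driver writes the interrupt delay register, postponing interrupt generation. */
    static void reset_delay_timer(struct device *dev)
    {
        dev->delay_timer = dev->delay_reload;
    }

    /* One simulated tick: the interrupt is generated only when the delay timer
       has expired AND the data queue is non-empty. */
    static bool tick(struct device *dev)
    {
        if (dev->delay_timer > 0)
            dev->delay_timer--;
        return dev->delay_timer == 0 && dev->queue_depth > 0;
    }

    int main(void)
    {
        struct device dev = { .queue_depth = 4, .delay_timer = 2, .delay_reload = 2 };
        for (int t = 0; t < 4; t++) {
            if (t == 1)
                reset_delay_timer(&dev);   /* driver still busy: postpone the interrupt */
            printf("tick %d: interrupt=%s\n", t, tick(&dev) ? "yes" : "no");
        }
        return 0;
    }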
Abstract:
An embodiment may include circuitry to facilitate, at least in part, a first network interface controller (NIC) in a client to be capable of accessing, via a second NIC in a server that is remote from the client and in a manner that is independent of an operating system environment in the server, at least one command interface of another controller of the server. The command interface may include at least one controller command queue. Such accessing may include writing at least one queue element to the at least one command queue to command the another controller to perform at least one operation associated with the another controller. The another controller may perform the at least one operation in response, at least in part, to the at least one queue element. Many alternatives, variations, and modifications are possible.
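To make the command-queue access concrete, the following C sketch posts one queue element into a controller command queue, as a client-side NIC might do through the server's NIC without involving the server's operating system. The element layout, queue size, and doorbell-style tail update are illustrative assumptions; the remote, OS-independent transport is only simulated as a local memory write.

    /* Minimal sketch of writing a queue element to a controller command queue. */
    #include <stdio.h>
    #include <stdint.h>

    #define QUEUE_SLOTS 4

    /* Hypothetical layout of a controller command queue element. */
    struct queue_element {
        uint16_t opcode;      /* operation the controller should perform */
        uint64_t buffer_addr; /* data buffer involved in the operation   */
        uint32_t length;
    };

    struct command_queue {
        struct queue_element slots[QUEUE_SLOTS];
        unsigned tail;        /* next free slot; advancing it signals the controller */
    };

    /* Write one element into the command queue to command the controller to
       perform the corresponding operation. In the described embodiment this
       write would arrive over the network via the server's NIC. */
    static void post_command(struct command_queue *q, struct queue_element el)
    {
        q->slots[q->tail % QUEUE_SLOTS] = el;
        q->tail++;            /* controller executes the queued operation */
    }

    int main(void)
    {
        struct command_queue cq = { .tail = 0 };
        post_command(&cq, (struct queue_element){ .opcode = 0x02,      /* e.g. a read */
                                                  .buffer_addr = 0x1000, .length = 512 });
        printf("queued opcode 0x%02x, tail=%u\n", (unsigned)cq.slots[0].opcode, cq.tail);
        return 0;
    }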
Abstract:
Certain aspects of a method and system for configuring a plurality of network interfaces that share a physical interface (PHY) may include a system comprising one or more physical network interface controllers (NICs) and two or more virtual NICs. One or more drivers associated with each of the virtual NICs that share one or more Ethernet ports associated with the physical NICs may be synchronized based on controlling one or more parameters associated with one or more Ethernet ports. One or more wake on LAN (WoL) patterns associated with each of the drivers may be detected at one or more Ethernet ports. A wake up signal may be communicated to one or more drivers associated with the detected WoL patterns. One of the drivers may be appointed to be a port master driver. If a failure of the appointed port master driver is detected, another driver may be appointed to be the port master driver.
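The port-master appointment and failover described above can be sketched as a simple election over the drivers sharing the port, as in the C example below. The driver array, the "first healthy driver wins" policy, and the failure flag are illustrative assumptions, not the disclosure's selection scheme.

    /* Minimal sketch of port master driver appointment and failover. */
    #include <stdio.h>
    #include <stdbool.h>

    #define NUM_DRIVERS 3

    struct vnic_driver {
        int  id;
        bool alive;       /* false once a failure of this driver is detected */
    };

    /* Appoint a healthy driver as the port master; reappoint after a failure. */
    static int elect_port_master(struct vnic_driver drv[], int n)
    {
        for (int i = 0; i < n; i++)
            if (drv[i].alive)
                return drv[i].id;
        return -1;        /* no healthy driver available */
    }

    int main(void)
    {
        struct vnic_driver drv[NUM_DRIVERS] = { {0, true}, {1, true}, {2, true} };
        int master = elect_port_master(drv, NUM_DRIVERS);
        printf("port master driver: %d\n", master);

        drv[master].alive = false;                     /* failure of the appointed master */
        master = elect_port_master(drv, NUM_DRIVERS);  /* another driver is appointed     */
        printf("port master driver after failover: %d\n", master);
        return 0;
    }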
Abstract:
Certain aspects of a method and system for transparent transmission control protocol (TCP) offload with per-flow estimation of the far end transmit window are disclosed. Aspects of a method may include storing, at a network interface card (NIC) processor, state information for a received TCP segment and state information for transmitted TCP segments for a determined network flow, without transferring state information for the received TCP segment to a host system communicatively coupled to the NIC. The generation of a new TCP segment comprising the collected received TCP segments may be controlled based on the occurrence of a termination event and a transmit window size. The period of time for aggregation of received TCP segments may be calculated based on the sequence numbers of the next expected TCP segment and the next received acknowledgement packet.
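One possible reading of the flush decision and window arithmetic is sketched below in C: aggregation on the NIC ends either on a termination event or when the collected data reaches the estimated transmit window, and the amount of outstanding data is derived from the difference between two sequence numbers. The field names, the specific flush rule, and the outstanding-data formula are assumptions for illustration only.

    /* Minimal sketch of a per-flow aggregation/flush decision on a NIC. */
    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    struct flow_state {
        uint32_t aggregated_bytes;   /* payload collected so far on the NIC           */
        uint32_t next_expected_seq;  /* sequence number of the next expected segment   */
        uint32_t last_ack_seq;       /* sequence number from the last acknowledgement  */
        uint32_t tx_window;          /* estimated far end transmit window              */
    };

    /* Stop aggregating and hand one new, larger TCP segment to the host when a
       termination event occurs or the collected data fills the estimated window
       (one possible policy, assumed for illustration). */
    static bool should_flush(const struct flow_state *f, bool termination_event)
    {
        return termination_event || f->aggregated_bytes >= f->tx_window;
    }

    /* One possible estimate of outstanding data, taken as the difference between
       the next expected sequence number and the last acknowledged sequence number. */
    static uint32_t estimate_outstanding(const struct flow_state *f)
    {
        return f->next_expected_seq - f->last_ack_seq;
    }

    int main(void)
    {
        struct flow_state f = { .aggregated_bytes = 32000, .next_expected_seq = 150000,
                                .last_ack_seq = 140000, .tx_window = 65535 };
        printf("outstanding bytes: %u\n", (unsigned)estimate_outstanding(&f));
        printf("flush now: %s\n", should_flush(&f, false) ? "yes" : "no");
        return 0;
    }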
Abstract:
Certain aspects of a method and system for transparent transmission control protocol (TCP) offload with best effort direct placement of incoming traffic are disclosed. Aspects of a method may include collecting TCP segments in a network interface card (NIC) processor without transferring state information to a host processor every time a TCP segment is received. When an event occurs that terminates the collection of TCP segments, the NIC processor may generate a new aggregated TCP segment based on the collected TCP segments. If a placement sequence number corresponding to the generated new TCP segment for the particular network flow is received before the TCP segment itself is received, the generated new TCP segment may be transferred directly from memory to the user buffer instead of to a kernel buffer, which would require a further copy by the host stack from the kernel buffer to the user buffer.
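The best-effort placement choice reduces to a simple branch, sketched in C below: if a placement hint with a matching sequence number arrived before the data, place the aggregated segment directly in the user buffer; otherwise fall back to the kernel buffer and its extra copy. The hint structure and function names are illustrative assumptions, not the disclosure's interfaces.

    /* Minimal sketch of the best-effort direct placement decision. */
    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Placement hint posted by the host before the data arrives: "a segment
       starting at this sequence number belongs in this user buffer". */
    struct placement_hint {
        bool     valid;
        uint32_t seq;         /* placement sequence number               */
        char    *user_buffer; /* destination if the hint arrives in time */
    };

    /* Choose the destination for a newly aggregated TCP segment: the user
       buffer when a matching placement hint was received before the data,
       otherwise the kernel buffer (which costs a further host-stack copy). */
    static char *choose_destination(const struct placement_hint *hint,
                                    uint32_t segment_seq,
                                    char *kernel_buffer)
    {
        if (hint->valid && hint->seq == segment_seq)
            return hint->user_buffer;   /* direct placement: skip the kernel copy */
        return kernel_buffer;           /* fall back to the copy path             */
    }

    int main(void)
    {
        char user_buf[64], kernel_buf[64];
        struct placement_hint hint = { .valid = true, .seq = 1000, .user_buffer = user_buf };

        char *dst = choose_destination(&hint, 1000, kernel_buf);
        printf("placed %s\n", dst == user_buf ? "directly in the user buffer"
                                              : "in the kernel buffer");
        return 0;
    }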
Abstract:
Methods and apparatus for implementing notification by network elements of packet drops. In response to determining that a packet is to be dropped, a network element such as a switch or router determines the source of the packet and returns a dropped packet notification message to the source. Upon receipt of the notification, networking software or embedded hardware on the source causes the dropped packet to be retransmitted. The notification may also be sent from the network element to the destination computer to inform networking software or embedded logic implemented by the destination computer that the packet was dropped and that notification to the source has been sent, thus relieving the destination of the need to send a Selective ACKnowledgment (SACK) message to inform the source that the packet was not delivered.
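The drop-handling behavior can be illustrated with the C sketch below, in which the network element notifies both the source (which retransmits) and the destination (which then need not send a SACK). The packet structure and the printf-based notification stand in for a real control message on the wire and are assumptions for illustration.

    /* Minimal sketch of dropped-packet notification by a network element. */
    #include <stdio.h>

    struct packet {
        const char *src;   /* source address              */
        const char *dst;   /* destination address         */
        unsigned    seq;   /* identifies the dropped data */
    };

    /* Hypothetical notification sender; a real switch or router would emit a
       dedicated dropped packet notification message onto the wire. */
    static void notify(const char *who, const struct packet *p)
    {
        printf("notify %s: packet %u from %s to %s was dropped\n",
               who, p->seq, p->src, p->dst);
    }

    /* On deciding to drop a packet, the network element tells the source (so it
       can retransmit) and the destination (so it need not send a SACK to report
       the missing packet). */
    static void drop_packet(const struct packet *p)
    {
        notify(p->src, p);   /* source retransmits on receipt of this message */
        notify(p->dst, p);   /* destination skips the SACK for this packet    */
    }

    int main(void)
    {
        struct packet p = { .src = "hostA", .dst = "hostB", .seq = 42 };
        drop_packet(&p);
        return 0;
    }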