Network interface device that sets an ECN-CE bit in response to detecting congestion at an internal bus interface

    Publication number: US10673648B1

    Publication date: 2020-06-02

    Application number: US16358514

    Application date: 2019-03-19

    Abstract: A network device includes a Network Interface Device (NID) and multiple servers. Each server is coupled to the NID via a corresponding PCIe bus. The NID has a network port through which it receives packets. The packets are destined for one of the servers. The NID detects a PCIe congestion condition regarding the PCIe bus to the server. Rather than transferring the packet across the bus, the NID buffers the packet and places a pointer to the packet in an overflow queue. If the level of bus congestion is high, the NID sets the packet's ECN-CE bit. When PCIe bus congestion subsides, the packet passes to the server. The server responds by returning an ACK whose ECE bit is set. The originating TCP endpoint in turn reduces the rate at which it sends data to the destination server, thereby reducing congestion at the PCIe bus interface within the network device.
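The marking path described in the abstract above can be sketched as follows. This is a minimal model, not the patented implementation: the queue-depth threshold, the packet field names, and the `transfer` stub are illustrative assumptions.

```python
from collections import deque

# Hypothetical threshold; the patent does not specify a concrete queue depth.
ECN_MARK_THRESHOLD = 64   # overflow-queue depth above which ECN-CE is set

class NidPort:
    """Minimal model of the NID's per-server PCIe egress path."""
    def __init__(self):
        self.overflow_queue = deque()   # holds pointers to buffered packets
        self.bus_ready = True           # False while the PCIe bus is congested

    def receive(self, packet):
        if self.bus_ready:
            return self.transfer(packet)         # normal path: straight to the server
        # Bus congested: buffer the packet instead of transferring it.
        if len(self.overflow_queue) >= ECN_MARK_THRESHOLD:
            packet["ecn"] = "CE"                 # mark Congestion Experienced
        self.overflow_queue.append(packet)

    def drain(self):
        """Called when PCIe bus congestion subsides."""
        while self.overflow_queue:
            self.transfer(self.overflow_queue.popleft())

    def transfer(self, packet):
        pass  # stand-in for the DMA across the PCIe bus to the server
```

The downstream TCP behavior (the server echoing ECE in its ACK, the sender slowing down) is ordinary RFC 3168 ECN and is not modeled here.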

    Inter-packet interval prediction learning algorithm

    Publication number: US09900090B1

    Publication date: 2018-02-20

    Application number: US14690362

    Application date: 2015-04-17

    Abstract: An appliance receives packets that are part of a flow pair, each packet sharing an application protocol. The appliance determines the application protocol of the packets by performing deep packet inspection (DPI) on the packets. Packet sizes are measured and converted into packet size states. Packet size states, packet sequence numbers, and packet flow directions are used to create an application protocol estimation table (APET). The APET is used during normal operation to estimate the application protocol of a flow pair without performing time-consuming DPI. The appliance then determines inter-packet intervals between received packets. The inter-packet intervals are converted into inter-packet interval states. The inter-packet interval states and packet sequence numbers are used to create an inter-packet interval prediction table. The appliance then stores an inter-packet interval prediction table for each application protocol. The inter-packet interval prediction table is used during operation to predict the inter-packet interval between packets.
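The learn-then-predict loop above can be illustrated with a small sketch. The interval-state boundaries and the "most frequent state wins" prediction rule are assumptions; the patent leaves the quantization and table layout to the implementation.

```python
from collections import defaultdict, Counter

# Hypothetical interval-state boundaries in milliseconds.
STATE_BOUNDS = [1.0, 10.0, 100.0]

def interval_state(ms):
    """Quantize an inter-packet interval into a small state index."""
    for state, bound in enumerate(STATE_BOUNDS):
        if ms < bound:
            return state
    return len(STATE_BOUNDS)

class IntervalPredictor:
    """Per-protocol inter-packet interval prediction table."""
    def __init__(self):
        self.observed = defaultdict(Counter)  # sequence number -> Counter of states

    def learn(self, timestamps_ms):
        """Record the interval state at each sequence position of one flow."""
        for seq, (a, b) in enumerate(zip(timestamps_ms, timestamps_ms[1:])):
            self.observed[seq][interval_state(b - a)] += 1

    def predict(self, seq):
        """Most frequently observed interval state at this sequence position."""
        return self.observed[seq].most_common(1)[0][0]
```

In the patented appliance one such table would be kept per application protocol, with the APET selecting which table to consult.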

    Communicating a neural network feature vector (NNFV) to a host and receiving back a set of weight values for a neural network

    Publication number: US12223418B1

    Publication date: 2025-02-11

    Application number: US14841722

    Application date: 2015-09-01

    Abstract: A flow of packets is communicated through a data center. The data center includes multiple racks, where each rack includes multiple network devices. A group of packets of the flow is received onto a first network device. The first device includes a neural network. The first network device generates a neural network feature vector (NNFV) based on the received packets. The first network device then sends the NNFV to a second network device. The second device uses the NNFV to determine a set of weight values. The weight values are then sent back to the first network device. The first device loads the weight values into the neural network. The neural network, as configured by the weight values, then analyzes each of a plurality of flows received onto the first device to determine whether the flow likely has a particular characteristic.
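The NNFV round trip described above can be sketched as three steps: build the feature vector locally, send it to the second device, and load the returned weights. The feature choices and the host-side `fit` stand-in are assumptions; the patent does not enumerate the NNFV contents or the training procedure.

```python
import statistics

def feature_vector(packets):
    """Hypothetical NNFV: summary statistics over a group of packets.
    Sizes and inter-arrival gaps are plausible features, not the patent's list."""
    sizes = [p["size"] for p in packets]
    gaps = [b["t"] - a["t"] for a, b in zip(packets, packets[1:])]
    return [
        statistics.mean(sizes),
        statistics.pstdev(sizes),
        statistics.mean(gaps) if gaps else 0.0,
    ]

class FirstDevice:
    """The rack device that builds the NNFV and receives weights back."""
    def __init__(self, host):
        self.host = host
        self.weights = None

    def train_on(self, packets):
        nnfv = feature_vector(packets)       # 1. build the NNFV locally
        self.weights = self.host.fit(nnfv)   # 2. send NNFV, receive weight values
        # 3. self.weights now configures the local neural network

class HostDevice:
    """Stand-in for the second device's training step."""
    def fit(self, nnfv):
        # Toy weight derivation so the sketch is runnable end to end.
        return [v / (sum(nnfv) or 1.0) for v in nnfv]
```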

    Packet prediction in a multi-protocol label switching network using operation, administration, and maintenance (OAM) messaging

    Publication number: US10250528B2

    Publication date: 2019-04-02

    Application number: US14264003

    Application date: 2014-04-28

    Abstract: A first switch in an MPLS network receives a plurality of packets that are part of a pair of flows. The first switch performs a packet prediction learning algorithm on the first plurality of packets and generates packet prediction information that is communicated to a second switch within the MPLS network utilizing an Operations, Administration, and Maintenance (OAM) packet (message). In a first example, the first switch communicates a packet prediction information notification to a Network Operations Center (NOC) that in response communicates a packet prediction control signal to the second switch. In a second example, the first switch does not communicate a packet prediction information notification. In the first example, the second switch utilizes the packet prediction control signal to determine if the packet prediction information is to be utilized. In the second example, the second switch independently determines if the packet prediction information is to be used.
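Carrying prediction information inside an OAM message amounts to packing it into a type-length-value field. The TLV layout below is purely illustrative; the patent does not define a wire format, and `OAM_PREDICTION_TYPE` is an invented code point.

```python
import struct

# Hypothetical TLV code point for packet-prediction information.
OAM_PREDICTION_TYPE = 0x7F

def build_oam_tlv(flow_id, size_state, interval_state):
    """Pack prediction info as type(1) | length(1) | flow_id(4) | states(1+1)."""
    value = struct.pack("!IBB", flow_id, size_state, interval_state)
    return struct.pack("!BB", OAM_PREDICTION_TYPE, len(value)) + value

def parse_oam_tlv(data):
    """Inverse of build_oam_tlv, as the second switch would run it."""
    tlv_type, _length = struct.unpack_from("!BB", data)
    flow_id, size_state, interval_state = struct.unpack_from("!IBB", data, 2)
    return {"type": tlv_type, "flow_id": flow_id,
            "size_state": size_state, "interval_state": interval_state}
```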

    Loading a flow table with neural network determined information

    Publication number: US09929933B1

    Publication date: 2018-03-27

    Application number: US14841717

    Application date: 2015-09-01

    CPC classification number: H04L45/02 H04B10/27 H04L43/026 H04L43/0876 H04L43/16

    Abstract: A flow of packets is communicated through a data center. The data center includes multiple racks, where each rack includes multiple network devices. A group of packets of the flow is received onto an integrated circuit located in one of the network devices. The integrated circuit includes a neural network and a flow table. The neural network analyzes the group of packets and in response determines if it is likely that the flow has a particular characteristic. The neural network outputs a neural network output value that indicates if it is likely that the flow has a particular characteristic. The neural network output value, or a value derived from it, is included in a flow entry in the flow table on the integrated circuit. Packets of the flow subsequently received onto the integrated circuit are routed or otherwise processed according to the flow entry associated with the flow.
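The flow-table mechanism above, where a value derived from the neural network output steers later packets of the same flow, can be sketched as follows. The 5-tuple key, the 0.5 cutoff, and the action names are assumptions for illustration.

```python
# Assumed cutoff for "likely has the particular characteristic".
LIKELY_THRESHOLD = 0.5

class FlowTable:
    """Flow table holding a value derived from the neural network output."""
    def __init__(self):
        self.entries = {}   # 5-tuple -> flow entry

    def load(self, five_tuple, nn_output):
        """Install the NN-derived value when the flow's first packets are analyzed."""
        self.entries[five_tuple] = {
            "nn_output": nn_output,
            "action": "inspect" if nn_output > LIKELY_THRESHOLD else "fast_path",
        }

    def process(self, five_tuple):
        """Subsequent packets are handled per the installed flow entry."""
        entry = self.entries.get(five_tuple)
        return entry["action"] if entry else "default"
```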

    Network interface device that sets an ECN-CE bit in response to detecting congestion at an internal bus interface

    Publication number: US10917348B1

    Publication date: 2021-02-09

    Application number: US16358351

    Application date: 2019-03-19

    Abstract: A network device includes a Network Interface Device (NID) and multiple servers. Each server is coupled to the NID via a corresponding PCIe bus. The NID has a network port through which it receives packets. The packets are destined for one of the servers. The NID detects a PCIe congestion condition regarding the PCIe bus to the server. Rather than transferring the packet across the bus, the NID buffers the packet and places a pointer to the packet in an overflow queue. If the level of bus congestion is high, the NID sets the packet's ECN-CE bit. When PCIe bus congestion subsides, the packet passes to the server. The server responds by returning an ACK whose ECE bit is set. The originating TCP endpoint in turn reduces the rate at which it sends data to the destination server, thereby reducing congestion at the PCIe bus interface within the network device.

    Using a neural network to determine how to direct a flow

    Publication number: US10129135B1

    Publication date: 2018-11-13

    Application number: US14841719

    Application date: 2015-09-01

    Abstract: A flow of packets is communicated through a data center. The data center includes multiple racks, where each rack includes multiple network devices. A group of packets of the flow is received onto an integrated circuit located in a first network device. The integrated circuit includes a neural network. The neural network analyzes the group of packets and in response outputs a neural network output value. The neural network output value is used to determine how the packets of the flow are to be output from a second network device. In one example, each packet of the flow output by the first network device is output along with a tag. The tag is indicative of the neural network output value. The second device uses the tag to determine which output port located on the second device is to be used to output each of the packets.
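The tag-based forwarding in the example above can be sketched in a few lines: the first device attaches a tag indicative of the neural network output, and the second device maps that tag to an egress port. The binary tag, the 0.5 cutoff, and the port names are illustrative assumptions.

```python
def tag_packets(packets, nn_output_value):
    """First device: attach a tag indicative of the NN output to each packet."""
    tag = 1 if nn_output_value > 0.5 else 0   # assumed binarization of the output
    return [dict(p, tag=tag) for p in packets]

# Second device's tag -> output-port table (hypothetical port names).
PORT_BY_TAG = {0: "port0", 1: "port1"}

def output_port(packet):
    """Second device: choose the egress port from the packet's tag."""
    return PORT_BY_TAG[packet["tag"]]
```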

    Inter-packet interval prediction learning algorithm (invention application, granted)

    Publication number: US20140133320A1

    Publication date: 2014-05-15

    Application number: US13675620

    Application date: 2012-11-13

    Abstract: An appliance receives packets that are part of a flow pair, each packet sharing an application protocol. The appliance determines the application protocol of the packets by performing deep packet inspection (DPI) on the packets. Packet sizes are measured and converted into packet size states. Packet size states, packet sequence numbers, and packet flow directions are used to create an application protocol estimation table (APET). The APET is used during normal operation to estimate the application protocol of a flow pair without performing time-consuming DPI. The appliance then determines inter-packet intervals between received packets. The inter-packet intervals are converted into inter-packet interval states. The inter-packet interval states and packet sequence numbers are used to create an inter-packet interval prediction table. The appliance then stores an inter-packet interval prediction table for each application protocol. The inter-packet interval prediction table is used during operation to predict the inter-packet interval between packets.
