PACKET SECURITY OVER MULTIPLE NETWORKS
    1.
    Invention Publication

    Publication Number: US20230155988A1

    Publication Date: 2023-05-18

    Application Number: US18099795

    Application Date: 2023-01-20

    CPC classification number: H04L63/0485 H04L63/0478 H04L63/0272

    Abstract: Examples described herein relate to a network interface device that includes an interface and circuitry. In some examples, the circuitry coupled to the interface is to apply encryption for packets received from a first network interface device and tunnel the encrypted packets to a second network interface device. In some examples, forwarding operations by the first network interface device and forwarding operations in the second network interface device are based on different header fields.
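
    The abstract above describes encrypting packets received from one network interface device and tunneling them to another, with forwarding at each device keyed to different header fields. The following Python sketch illustrates one way such encrypt-and-tunnel processing could look; it is not the patented implementation, and the key handling, outer addresses, and header layout are illustrative assumptions (AES-GCM from the `cryptography` package stands in for whatever cipher the device actually uses).

        # Hedged sketch: encrypt the whole inner packet (whose headers steer
        # forwarding at the first device) and wrap it in an outer tunnel header
        # (whose fields steer forwarding toward the second device).
        import os
        import struct
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        TUNNEL_KEY = AESGCM.generate_key(bit_length=256)    # illustrative tunnel key
        OUTER_SRC = bytes([10, 0, 0, 1])                    # outer IPv4 source (assumed)
        OUTER_DST = bytes([10, 0, 0, 2])                    # outer IPv4 destination (assumed)

        def tunnel_encrypt(inner_packet: bytes) -> bytes:
            """Encrypt the inner packet and prepend a simple outer tunnel header."""
            aead = AESGCM(TUNNEL_KEY)
            nonce = os.urandom(12)                          # fresh per-packet nonce
            outer_header = OUTER_SRC + OUTER_DST            # fields used for outer forwarding
            ciphertext = aead.encrypt(nonce, inner_packet, outer_header)  # header bound as AAD
            return outer_header + nonce + struct.pack("!H", len(ciphertext)) + ciphertext

    The receiving network interface device would strip the outer header, verify and decrypt with the same key, and then forward based on the recovered inner header fields.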

    PATH SELECTION FOR PACKET TRANSMISSION
    2.
    Invention Publication

    Publication Number: US20240195749A1

    Publication Date: 2024-06-13

    Application Number: US18424376

    Application Date: 2024-01-26

    CPC classification number: H04L47/628 H04L45/24 H04L49/3063

    Abstract: Examples described herein relate to a network interface device comprising multi-stage programmable packet processing pipeline circuitry to determine a path to transmit a packet based on relative network traffic transmitted via multiple paths. In some examples, determining a path to transmit a packet is based on Deficit Round Robin (DRR). In some examples, the programmable packet processing pipeline circuitry includes: a first stage to manage two or more paths, wherein a path of the two or more paths of the first stage is associated with two or more child nodes; and a second stage to manage two or more paths, wherein a path of the two or more paths of the second stage is associated with two or more child nodes, and at least one child node is associated with the determined path.
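
    As a rough illustration of the DRR-based selection described above, the sketch below keeps a per-path deficit counter, credits the next path with a quantum of bytes whenever the current path has exhausted its credit, and charges each transmitted packet against the selected path so that traffic stays balanced across paths over time. The single flat stage, the path names, and the quantum value are simplifying assumptions rather than details from the publication.

        # Hedged sketch of Deficit Round Robin (DRR) path selection.
        from collections import deque

        class DrrPathSelector:
            def __init__(self, paths, quantum=1500):
                self.paths = deque(paths)                # candidate transmit paths
                self.quantum = quantum                   # bytes credited per rotation
                self.deficit = {p: 0 for p in paths}     # unused credit per path

            def select(self, packet_len: int):
                """Return the path that should carry a packet of packet_len bytes."""
                # Terminates: each rotation credits the new head path with a quantum.
                while self.deficit[self.paths[0]] < packet_len:
                    self.paths.rotate(-1)
                    self.deficit[self.paths[0]] += self.quantum
                self.deficit[self.paths[0]] -= packet_len    # charge bytes sent on this path
                return self.paths[0]

        selector = DrrPathSelector(["path_a", "path_b"], quantum=1500)
        for size in (1500, 1500, 64, 1500):
            print(size, selector.select(size))           # paths take turns, weighted by bytes sent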

    CONGESTION MANAGEMENT TECHNIQUES
    5.
    Invention Application

    Publication Number: US20200280518A1

    Publication Date: 2020-09-03

    Application Number: US16878466

    Application Date: 2020-05-19

    Abstract: Examples described herein relate to a network element comprising an ingress pipeline and at least one queue from which to egress packets. The network element can receive a packet and, at the ingress pipeline, generate a congestion notification packet to a sender of the packet based on detection of congestion in a target queue that is to store the packet, before the packet is stored in the congested target queue. The network element can generate a congestion notification packet based on a queue depth of the target queue and the likelihood that the target queue is congested. The likelihood that the queue is congested can be based on a probabilistic function including one or more of Proportional-Integral (PI) or Random Early Detection (RED). The network element can determine a pause time for the sender to pause sending particular packets based at least on a time for the target queue to drain to a target level.
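
    To make the two decisions in this abstract concrete, the sketch below combines a RED-style marking probability (one of the probabilistic functions the abstract names) with a pause time computed from how long the target queue needs to drain to a target level. The thresholds, maximum marking probability, and drain rate are illustrative parameters, not values from the application; a PI controller could be substituted for the RED curve.

        # Hedged sketch: RED-style notification decision plus drain-based pause time.
        import random

        MIN_TH = 16_000        # bytes: below this depth, never notify (assumed)
        MAX_TH = 64_000        # bytes: above this depth, always notify (assumed)
        MAX_P  = 0.5           # maximum marking probability (assumed)

        def should_notify(queue_depth_bytes: int) -> bool:
            """Probability of sending a congestion notification rises with queue depth."""
            if queue_depth_bytes <= MIN_TH:
                return False
            if queue_depth_bytes >= MAX_TH:
                return True
            p = MAX_P * (queue_depth_bytes - MIN_TH) / (MAX_TH - MIN_TH)
            return random.random() < p

        def pause_time_us(queue_depth_bytes: int, target_bytes: int,
                          drain_rate_bps: float) -> float:
            """Time (microseconds) for the target queue to drain to the target level."""
            excess_bits = max(queue_depth_bytes - target_bytes, 0) * 8
            return excess_bits / drain_rate_bps * 1e6

        # Example: a 48 KB queue draining at 10 Gb/s toward a 16 KB target (25.6 us).
        print(should_notify(48_000), round(pause_time_us(48_000, 16_000, 10e9), 2))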

    PATH SELECTION FOR PACKET TRANSMISSION
    6.
    Invention Publication

    Publication Number: US20240080276A1

    Publication Date: 2024-03-07

    Application Number: US18503851

    Application Date: 2023-11-07

    CPC classification number: H04L47/628 H04L45/24 H04L49/3063

    Abstract: Examples described herein relate to a network interface device comprising multi-stage programmable packet processing pipeline circuitry to determine a path to transmit a packet based on relative network traffic transmitted via multiple paths. In some examples, determining a path to transmit a packet is based on Deficit Round Robin (DRR). In some examples, the programmable packet processing pipeline circuitry includes: a first stage to manage two or more paths, wherein a path of the two or more paths of the first stage is associated with two or more child nodes; and a second stage to manage two or more paths, wherein a path of the two or more paths of the second stage is associated with two or more child nodes, and at least one child node is associated with the determined path.

    PATH SELECTION FOR PACKET TRANSMISSION

    Publication Number: US20220109639A1

    Publication Date: 2022-04-07

    Application Number: US17550938

    Application Date: 2021-12-14

    Abstract: Examples described herein relate to a network interface device comprising multi-stage programmable packet processing pipeline circuitry to determine a path to transmit a packet based on relative network traffic transmitted via multiple paths. In some examples, determining a path to transmit a packet is based on Deficit Round Robin (DRR). In some examples, the programmable packet processing pipeline circuitry includes: a first stage to manage two or more paths, wherein a path of the two or more paths of the first stage is associated with two or more child nodes; and a second stage to manage two or more paths, wherein a path of the two or more paths of the second stage is associated with two or more child nodes, and at least one child node is associated with the determined path.
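
    The parent/child arrangement in this family of abstracts can also be sketched directly: a first-stage DRR decision chooses among parent groups, and a second-stage DRR decision chooses among that parent's child paths, with the chosen child becoming the transmit path. The group names, child names, and per-stage quanta below are illustrative assumptions, and the helper reuses the same deficit logic as the flat sketch shown earlier.

        # Hedged sketch of two-stage (parent group -> child path) DRR selection.
        from collections import deque

        def drr_pick(nodes: deque, deficit: dict, quantum: int, packet_len: int):
            """One DRR decision over `nodes`; mutates `nodes` and `deficit` in place."""
            while deficit[nodes[0]] < packet_len:
                nodes.rotate(-1)                       # move to the next node...
                deficit[nodes[0]] += quantum           # ...and grant it a quantum of bytes
            deficit[nodes[0]] -= packet_len            # charge the transmitted bytes
            return nodes[0]

        # First stage: parent groups; second stage: each parent's child paths.
        parents = deque(["group_a", "group_b"])
        children = {"group_a": deque(["a0", "a1"]), "group_b": deque(["b0", "b1"])}
        parent_deficit = {p: 0 for p in parents}
        child_deficit = {c: 0 for kids in children.values() for c in kids}

        def select_path(packet_len: int) -> str:
            parent = drr_pick(parents, parent_deficit, quantum=3000, packet_len=packet_len)
            return drr_pick(children[parent], child_deficit, quantum=1500, packet_len=packet_len)

        print(select_path(1200))   # prints "b1" on the first call (all deficits start at zero)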

    PREDICTIVE QUEUE DEPTH
    8.
    Invention Application

    Publication Number: US20210328930A1

    Publication Date: 2021-10-21

    Application Number: US17359533

    Application Date: 2021-06-26

    Abstract: Examples described herein relate to an apparatus that includes a network interface device comprising circuitry to identify at least one congested queue, predict an occupancy level of the at least one congested queue at the time at least one sender is predicted to receive at least one congestion notification, and transmit the at least one congestion notification to the at least one sender through zero or more intermediate nodes. In some examples, to identify at least one congested queue, the circuitry is to identify the at least one congested queue based on at least one fill level. In some examples, to identify at least one congested queue, the circuitry is to identify the at least one congested queue based on at least one predicted fill level at a predicted time the at least one sender receives the at least one congestion notification.
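
    A small numerical sketch of the prediction this abstract describes: estimate the congested queue's fill level at the moment the sender is expected to receive the congestion notification, using the current occupancy plus the arrival/drain imbalance over the notification delay, and notify only if that predicted level still exceeds a threshold. The rates, delay, and threshold are assumed inputs rather than values from the application.

        # Hedged sketch: predict queue occupancy at notification-receipt time.
        def predicted_fill_bytes(current_bytes: float,
                                 arrival_rate_bps: float,
                                 drain_rate_bps: float,
                                 notify_delay_s: float) -> float:
            """Occupancy expected when the sender receives the congestion notification."""
            delta_bytes = (arrival_rate_bps - drain_rate_bps) * notify_delay_s / 8
            return max(current_bytes + delta_bytes, 0.0)

        def should_send_notification(current_bytes, arrival_rate_bps, drain_rate_bps,
                                     notify_delay_s, threshold_bytes) -> bool:
            """Notify only if the queue is still predicted to be congested on arrival."""
            return predicted_fill_bytes(current_bytes, arrival_rate_bps,
                                        drain_rate_bps, notify_delay_s) > threshold_bytes

        # Example: 60 KB queued, 12 Gb/s arriving, 10 Gb/s draining, 20 us delay.
        # Prints True: the predicted 65 KB exceeds the 64 KB threshold.
        print(should_send_notification(60_000, 12e9, 10e9, 20e-6, 64_000))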

    RULE LOOKUP FOR PROCESSING PACKETS

    Publication Number: US20250030636A1

    Publication Date: 2025-01-23

    Application Number: US18900700

    Application Date: 2024-09-28

    Abstract: Examples described herein relate to configuring a device to perform longest prefix match (LPM) of rules associated with nodes to identify an action to perform on a packet. The rules can be stored among a memory and ternary content-addressable memory (TCAM) based on available memory capacity of the TCAM.
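
    The placement policy this abstract describes, storing LPM rules in TCAM while capacity remains and spilling the rest to ordinary memory, can be sketched as follows. The TCAM is modeled as a small dictionary with a fixed entry budget and memory as an unbounded one, with lookup checking both and keeping the longest matching prefix; the capacity, prefixes, and action names are illustrative.

        # Hedged sketch: LPM rules split between a capacity-limited "TCAM" and memory.
        import ipaddress

        class LpmTable:
            def __init__(self, tcam_capacity: int = 4):
                self.tcam_capacity = tcam_capacity
                self.tcam = {}       # prefix -> action; models TCAM entries
                self.memory = {}     # prefix -> action; spill-over rules

            def add_rule(self, prefix: str, action: str):
                """Place the rule in TCAM if space remains, otherwise in memory."""
                table = self.tcam if len(self.tcam) < self.tcam_capacity else self.memory
                table[ipaddress.ip_network(prefix)] = action

            def lookup(self, addr: str):
                """Return the action of the longest prefix matching addr, or None."""
                ip = ipaddress.ip_address(addr)
                best = None
                for table in (self.tcam, self.memory):
                    for net, action in table.items():
                        if ip in net and (best is None or net.prefixlen > best[0]):
                            best = (net.prefixlen, action)
                return best[1] if best else None

        table = LpmTable(tcam_capacity=4)
        rules = ["10.0.0.0/8", "10.1.0.0/16", "10.1.2.0/24", "192.168.0.0/16", "10.1.2.128/25"]
        for i, prefix in enumerate(rules):
            table.add_rule(prefix, f"action_{i}")
        print(table.lookup("10.1.2.200"))   # "action_4": the /25 rule, held in memory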
