Deadlock-free rerouting for resolving local link failures using detour paths

Publication number: US20220078104A1

Publication date: 2022-03-10

Application number: US17016464

Filing date: 2020-09-10

Abstract: A computing system including network elements arranged in at least one group. A plurality of the network elements are designated as spines and another plurality are designated as leaves, the spines and leaves are interconnected in a bipartite topology, and at least some of the spines and leaves are configured to: receive in a first leaf, from a source node, packets destined to a destination node via a second leaf, forward the packets via a first link to a first spine and to the second leaf via a second link, in response to detecting that the second link has failed, apply a detour path from the first leaf to the second leaf, including a detour link in a spine-to-leaf direction and another detour link in a leaf-to-spine direction, and forward subsequent packets, which are received in the first leaf and are destined to the second leaf, via the detour path.
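The detour described above can be sketched as a search for a path that bounces through a third leaf, using one spine-to-leaf detour link and one leaf-to-spine detour link. Everything here (full bipartite connectivity, the `find_detour` name, the failed-link model) is an illustrative assumption, not the patented implementation:

```python
def find_detour(src_leaf, dst_leaf, leaves, spines, failed):
    """Hypothetical model: every leaf/spine pair is linked unless the
    directed pair appears in `failed`. Returns a detour path
    src_leaf -> spine -> detour_leaf -> spine' -> dst_leaf, or None."""
    def link_ok(a, b):
        return (a, b) not in failed

    for s1 in spines:                       # normal leaf-to-spine hop
        if not link_ok(src_leaf, s1):
            continue
        for mid in leaves:                  # detour link, spine-to-leaf
            if mid in (src_leaf, dst_leaf) or not link_ok(s1, mid):
                continue
            for s2 in spines:               # detour link, leaf-to-spine
                if s2 != s1 and link_ok(mid, s2) and link_ok(s2, dst_leaf):
                    return [src_leaf, s1, mid, s2, dst_leaf]
    return None
```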

    Scalable pipeline for EVPN multi-homing

Publication number: US11102146B2

Publication date: 2021-08-24

Application number: US16706892

Filing date: 2019-12-09

    Abstract: One embodiment includes a network device including multiple interfaces to serve as ingress ports for receiving network packets from nodes in remote customer-site network(s) via a tunnel in a provider network, and from nodes in a local customer-site network, and egress ports for forwarding at least some of the network packets, and control circuitry to make a decision to drop a network packet to reduce packet duplication in at least one of the nodes, responsively to the network packet being identified as a packet of broadcast, unknown unicast, or multicast traffic, the network packet being subject to decapsulation of an encapsulation header, being assigned to one of the egress ports, and having a header including one of a plurality of virtual local area network identifications, or one of a plurality of source identifications.
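The drop decision above can be condensed into a predicate over packet attributes. This is a minimal sketch of such a split-horizon-style filter; the field names and filter sets are hypothetical, not the claimed hardware pipeline:

```python
def should_drop(pkt, filtered_vlans, filtered_sources):
    """Hypothetical drop predicate: a decapsulated BUM packet that has
    been assigned an egress port is dropped when its VLAN ID or its
    source ID is in a configured filter set."""
    return (pkt["is_bum"]                       # broadcast/unknown-unicast/multicast
            and pkt["decapsulated"]             # tunnel header was removed
            and pkt["egress_port"] is not None  # already assigned an egress port
            and (pkt["vlan"] in filtered_vlans
                 or pkt["source_id"] in filtered_sources))
```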

    TCAM with multi region lookups and a single logical lookup

Publication number: US10944675B1

Publication date: 2021-03-09

Application number: US16559658

Filing date: 2019-09-04

    Abstract: A network element includes ports, a hardware fabric, a packet classifier and control logic. The ports are configured to transmit and receive packets over a network. The fabric is configured to forward the packets between the ports. The packet classifier is configured to receive at least some of the packets and to specify an action to be applied to a packet in accordance with a set of rules. The classifier includes (i) multiple Ternary Content Addressable Memories (TCAMs), each TCAM configured to match the packet to a respective subset of the set of rules and to output a match result, and (ii) circuitry configured to specify the action to be applied to the packet based on match results produced for the packet by the multiple TCAMs, and based on a priority defined among the multiple TCAMs. The control logic is configured to apply the specified action to the packet.
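The multi-region lookup above can be sketched in software: each region matches its own rule subset, and a fixed priority among regions picks the final action. The class and rule encoding are illustrative assumptions, not the claimed circuitry:

```python
class MultiRegionTcam:
    """Sketch of several TCAM regions combined into one logical lookup;
    region order encodes priority (highest first)."""

    def __init__(self, regions):
        # each region is a list of rules (value, mask, action)
        self.regions = regions

    @staticmethod
    def _match(rules, key):
        for value, mask, action in rules:   # ternary match: a zero mask
            if key & mask == value & mask:  # bit means "don't care"
                return action
        return None

    def classify(self, key):
        for rules in self.regions:          # in hardware the regions are
            action = self._match(rules, key)  # searched in parallel and a
            if action is not None:          # priority mux picks one result
                return action
        return "default"
```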

    Hardware acceleration for uploading/downloading databases

Publication number: US20210042251A1

Publication date: 2021-02-11

Application number: US16537576

Filing date: 2019-08-11

    Abstract: A network element includes one or more ports for communicating over a network, a processor and packet processing hardware. The packet processing hardware is configured to transfer packets to and from the ports, and further includes data-transfer circuitry for data transfer with the processor. The processor and the data-transfer circuitry are configured to transfer between one another (i) one or more communication packets for transferal between the ports and the processor and (ii) one or more databases for transferal between the packet processing hardware and the processor, by (i) translating, by the processor, the transferal of both the communication packets and the databases into work elements, and posting the work elements on one or more work queues in a memory of the processor, and (ii) using the data-transfer circuitry, executing the work elements so as to transfer both the communication packets and the databases.
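The shared work-queue idea above, where packet transfers and database transfers are posted as uniform work elements and executed by the same data-transfer machinery, can be sketched as follows. Names and structure are illustrative assumptions:

```python
from collections import deque

class WorkQueue:
    """Sketch: the processor posts work elements for both packet and
    database transfers; the data-transfer circuitry (modelled here as
    execute_all) performs them in order."""

    def __init__(self):
        self.elements = deque()

    def post(self, kind, payload):
        assert kind in ("packet", "database")
        self.elements.append((kind, payload))

    def execute_all(self):
        completed = []
        while self.elements:                 # stand-in for the DMA engine
            completed.append(self.elements.popleft())
        return completed
```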

    Telemetry Event Aggregation
Invention application

Publication number: US20210021503A1

Publication date: 2021-01-21

Application number: US16515060

Filing date: 2019-07-18

Abstract: In one embodiment, a network device includes multiple interfaces including at least one egress interface, which is configured to transmit packets belonging to multiple flows to a packet data network, control circuitry configured to generate event-reporting data-items, each including flow and event-type information about a packet-related event occurring in the network device, a memory, and aggregation circuitry configured to aggregate data of at least some of the event-reporting data-items into aggregated-event-reporting data-items aggregated according to the flow and event-type information of the at least some event-reporting data-items, store the aggregated-event-reporting data-items in the memory, and forward one aggregated-event-reporting data-item of the aggregated-event-reporting data-items to a collector node, and purge the one aggregated-event-reporting data-item from the memory.
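The aggregate-then-purge flow above can be sketched with a map keyed by (flow, event-type). A simple counter stands in for the aggregated record; all names are illustrative assumptions:

```python
class EventAggregator:
    """Sketch: event reports are aggregated per (flow, event_type) key;
    forwarding a key's aggregate to the collector purges it from memory."""

    def __init__(self):
        self.memory = {}

    def report(self, flow, event_type, count=1):
        key = (flow, event_type)
        self.memory[key] = self.memory.get(key, 0) + count

    def forward_and_purge(self, flow, event_type):
        # send the aggregate to the collector node, then drop it
        return self.memory.pop((flow, event_type), 0)
```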

    Transaction based scheduling
Invention application

Publication number: US20210006513A1

Publication date: 2021-01-07

Application number: US16459651

Filing date: 2019-07-02

    Abstract: One embodiment includes a communication apparatus, including multiple interfaces including at least one egress interface to transmit packets belonging to multiple flows to a network, and control circuitry to queue packets belonging to the flows in respective flow-specific queues for transmission via a given egress interface, and to arbitrate among the flow-specific queues so as to select packets for transmission responsively to dynamically changing priorities that are assigned such that all packets in a first flow-specific queue, which is assigned a highest priority among the queues, are transmitted through the given egress interface until the first flow-specific queue is empty, after which the control circuitry assigns the highest priority to a second flow-specific queue, such that all packets in the second flow-specific queue are transmitted through the given egress interface until the second flow-specific queue is empty, after which the control circuitry assigns the highest priority to another flow-specific queue.
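The arbitration scheme above, where the highest-priority queue is drained to empty before the next queue inherits the highest priority, can be sketched over a static snapshot of the queues (real hardware re-evaluates as packets keep arriving):

```python
from collections import deque

def arbitrate(flow_queues):
    """Sketch of exhaustive-priority arbitration: flow_queues is ordered
    by current priority, and priority only moves to the next queue once
    the current one is empty."""
    transmitted = []
    for queue in flow_queues:        # priority moves only when a queue empties
        while queue:
            transmitted.append(queue.popleft())
    return transmitted
```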

    Unicast forwarding of adaptive-routing notifications

Publication number: US10819621B2

Publication date: 2020-10-27

Application number: US15050480

Filing date: 2016-02-23

    Abstract: A method for communication includes, in a first network switch that is part of a communication network having a topology, detecting a compromised ability to forward a flow of packets originating from a source endpoint to a destination endpoint. In response to detecting the compromised ability, the first network switch identifies, based on the topology, a second network switch that lies on a current route of the flow, and also lies on one or more alternative routes from the source endpoint to the destination endpoint that do not traverse the first network switch. A notification, which is addressed individually to the second network switch and requests the second network switch to reroute the flow, is sent from the first network switch.
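The selection step above, identifying a switch that lies on the current route and on at least one alternative route avoiding the compromised switch, can be sketched as a search over route lists. Names and the route representation are illustrative assumptions:

```python
def pick_notification_target(current_route, alternative_routes, this_switch):
    """Sketch: find a switch on the current route that also appears on
    an alternative route avoiding this (compromised) switch, so a
    unicast reroute notification can be addressed to it directly."""
    for switch in current_route:
        if switch == this_switch:
            continue
        if any(this_switch not in alt and switch in alt
               for alt in alternative_routes):
            return switch
    return None
```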

    Collective Communication System and Methods
Invention application

Publication number: US20200274733A1

Publication date: 2020-08-27

Application number: US16789458

Filing date: 2020-02-13

Abstract: A method in which a plurality of processes are configured to hold a block of data destined for other processes, with data repacking circuitry including receiving circuitry configured to receive at least one block of data from a source process of the plurality of processes, the repacking circuitry configured to repack received data in accordance with at least one destination process of the plurality of processes, and sending circuitry configured to send the repacked data to the at least one destination process of the plurality of processes, receiving a set of data for all-to-all data exchange, the set of data being configured as a matrix, the matrix being distributed among the plurality of processes, and transposing the data by each of the plurality of processes sending matrix data from the process to the repacking circuitry, and the repacking circuitry receiving, repacking, and sending the resulting matrix data to destination processes.
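The all-to-all exchange above amounts to a block transpose: each process i holds a block destined for each process j, and the repacking stage regroups the blocks by destination. A minimal sketch, modelling the repacking circuitry as a pure function:

```python
def all_to_all_transpose(blocks):
    """Sketch: blocks[i][j] is the block process i sends toward
    process j; after repacking, row j holds everything process j
    receives -- i.e. the transpose of the block matrix."""
    n = len(blocks)
    return [[blocks[i][j] for i in range(n)] for j in range(n)]
```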

    Power-efficient activation of multi-lane ports in a network element

Publication number: US10412673B2

Publication date: 2019-09-10

Application number: US15607494

Filing date: 2017-05-28

    Abstract: A network element includes circuitry and multiple ports. The ports are configured to transmit packets to a common destination via multiple paths of a communication network. Each port includes multiple serializers that serially transmit the packets over respective physical lanes. The power consumed by each port is a nonlinear function of the number of serializers activated in the port. The circuitry is configured to select one or more serializers among the ports to (i) meet a throughput demand via the ports and (ii) minimize an overall power consumed by the ports under a constraint of the nonlinear function, and to activate only the selected serializers. The circuitry is configured to choose for a packet received in the network element and destined to the common destination a port in which at least one of the serializers is activated, and to transmit the packet to the common destination via the chosen port.
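The selection problem above, meeting a throughput demand while minimizing total power under a nonlinear per-port cost, can be sketched as a small brute-force search. The convex power model and all names are illustrative assumptions, not the claimed circuitry:

```python
from itertools import product

def select_serializers(max_lanes, demand, lane_rate, power):
    """Sketch: choose the per-port count of active serializers so the
    total rate meets `demand` at minimum total power, where power(n)
    is a nonlinear per-port cost function."""
    best_cost, best_counts = None, None
    for counts in product(*(range(m + 1) for m in max_lanes)):
        if sum(counts) * lane_rate < demand:
            continue                         # throughput demand not met
        cost = sum(power(c) for c in counts)
        if best_cost is None or cost < best_cost:
            best_cost, best_counts = cost, counts
    return best_counts
```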

    Efficient use of buffer space in a network switch

Publication number: US10387074B2

Publication date: 2019-08-20

Application number: US15161316

Filing date: 2016-05-23

    Abstract: Communication apparatus includes multiple ports configured to serve as ingress ports and egress ports for connection to a packet data network. A memory is coupled to the ports and configured to contain both respective input buffers allocated to the ingress ports and a shared buffer holding data packets for transmission in multiple queues via the egress ports. Control logic is configured to monitor an overall occupancy level of the memory, and when a data packet is received through an ingress port having an input buffer that is fully occupied while the overall occupancy level of the memory is below a specified maximum, to allocate additional space in the memory to the input buffer and to accept the received data packet into the additional space.
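The admission policy above, where a packet arriving at a full input buffer is still accepted while overall memory occupancy is below a maximum, can be sketched as follows. Names, units, and the per-port accounting are illustrative assumptions:

```python
class SharedBufferSwitch:
    """Sketch: each ingress port has an input-buffer quota, but an
    over-quota arrival is still accepted (borrowing shared memory)
    while total occupancy stays below a configured maximum."""

    def __init__(self, input_quota, memory_max):
        self.input_quota = input_quota
        self.memory_max = memory_max
        self.occupancy = {}              # bytes buffered per ingress port

    def accept(self, port, size):
        used = self.occupancy.get(port, 0)
        total = sum(self.occupancy.values())
        if used + size > self.input_quota and total + size > self.memory_max:
            return False                 # no quota left and no shared headroom
        self.occupancy[port] = used + size
        return True
```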
