Memory-efficient handling of multicast traffic

    Publication (Announcement) No.: US10015112B2

    Publication (Announcement) Date: 2018-07-03

    Application No.: US14961923

    Application Date: 2015-12-08

    CPC classification number: H04L49/201 H04L45/745 H04L49/205 H04L49/9005

    Abstract: Communication apparatus includes multiple interfaces connected to a packet data network. A memory is coupled to the interfaces and configured as a buffer to contain packets received through ingress interfaces while awaiting transmission to the network via respective egress interfaces. Packet processing logic is configured, upon receipt of a multicast packet through an ingress interface, to identify a number of the egress interfaces through which respective copies of the multicast packet are to be transmitted, to allocate a space in the buffer for storage of a single copy of the multicast packet, to replicate and transmit multiple copies of the stored copy of the multicast packet through the egress interfaces, to maintain a count of the replicated copies that have been transmitted, and when the count reaches the identified number, to release the allocated space in the buffer.
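
    The buffer-accounting scheme in this abstract can be pictured with a short sketch: a single stored copy, a per-packet counter of pending egress copies, and release of the space when the counter reaches zero. This is a minimal illustration only; the class and method names (MulticastBuffer, allocate, transmit_copy) are invented and not taken from the patent.

    class MulticastBuffer:
        """Stores one copy of a multicast packet; frees it after the last egress copy is sent."""

        def __init__(self):
            self._slots = {}    # packet_id -> [payload, copies still to transmit]
            self._next_id = 0

        def allocate(self, payload, num_egress_ports):
            """Reserve space for a single stored copy; record how many copies must go out."""
            packet_id = self._next_id
            self._next_id += 1
            self._slots[packet_id] = [payload, num_egress_ports]
            return packet_id

        def transmit_copy(self, packet_id):
            """Replicate the stored copy for one egress port; release the slot after the last one."""
            slot = self._slots[packet_id]
            payload = slot[0]
            slot[1] -= 1
            if slot[1] == 0:
                del self._slots[packet_id]   # count reached the identified number: free the space
            return payload

    buf = MulticastBuffer()
    pid = buf.allocate(b"multicast frame", num_egress_ports=3)
    for _ in range(3):
        buf.transmit_copy(pid)
    assert pid not in buf._slots   # buffer space released after the last replicated copy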

    Coherent capturing of shared-buffer status

    Publication (Announcement) No.: US11500737B2

    Publication (Announcement) Date: 2022-11-15

    Application No.: US16417669

    Application Date: 2019-05-21

    Abstract: A network element includes multiple ports configured to communicate over a network, a buffer memory, a snapshot memory, and circuitry. The circuitry is configured to forward packets between the ports, to temporarily store information associated with the packets in the buffer memory, to continuously write at least part of the information to the snapshot memory concurrently with storage of the information in the buffer memory, and, in response to at least one predefined diagnostic event, to stop writing of the information to the snapshot memory, so as to create in the snapshot memory a coherent snapshot corresponding to a time of the diagnostic event.
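
    As a rough illustration of the mirrored-write idea, the sketch below keeps a shadow copy of buffer-status entries and simply stops mirroring when a diagnostic event fires, leaving a self-consistent snapshot. All names and the key-value layout are assumptions made for the example.

    class ShadowedBufferStatus:
        """Buffer-status store that mirrors every write into a snapshot area until frozen."""

        def __init__(self):
            self._live = {}       # live buffer-status memory
            self._snapshot = {}   # continuously written shadow copy
            self._frozen = False

        def write(self, key, value):
            self._live[key] = value
            if not self._frozen:
                self._snapshot[key] = value   # written concurrently with the live update

        def on_diagnostic_event(self):
            self._frozen = True               # stop writing: the snapshot is now coherent

        def snapshot(self):
            return dict(self._snapshot)

    status = ShadowedBufferStatus()
    status.write("port1_queue_depth", 128)
    status.on_diagnostic_event()
    status.write("port1_queue_depth", 512)   # no longer reflected in the snapshot
    assert status.snapshot()["port1_queue_depth"] == 128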

    Global policers
    Invention Application

    Publication (Announcement) No.: US20210226895A1

    Publication (Announcement) Date: 2021-07-22

    Application No.: US16746879

    Application Date: 2020-01-19

    Abstract: Apparatus for global policing of a bandwidth of a flow, the apparatus including a network device including a local policer configured to perform bandwidth policing on the flow within the network device, and a communications module configured to: send local policer state information from the local policer to a remote global policer, and receive policer state information from the remote global policer and update the local policer state information based on the remote global policer state information. Related apparatus and methods are also provided.
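
    One plausible way to picture the local/global split is a token-bucket policer that exports its state and tightens its allowance when the global policer reports aggregate overuse. The sketch below assumes a token-bucket discipline and made-up method names, neither of which is specified by the abstract.

    import time

    class LocalPolicer:
        """Token-bucket policer whose state is reconciled with a remote global policer."""

        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s
            self.burst = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            """Local bandwidth policing on the flow within the network device."""
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True
            return False

        def export_state(self):
            """Local policer state information sent to the remote global policer."""
            return {"tokens": self.tokens, "burst": self.burst}

        def apply_global_state(self, state):
            """Update local state from the global view, e.g. a reduced token allowance."""
            self.tokens = min(self.tokens, state["tokens"])

    policer = LocalPolicer(rate_bytes_per_s=125_000, burst_bytes=10_000)
    print(policer.allow(1_500))                # True: within the local allowance
    policer.apply_global_state({"tokens": 0})  # global policer reports the flow is over budget
    print(policer.allow(1_500))                # False until tokens accumulate again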

    Network element with improved cache flushing

    Publication (Announcement) No.: US10938720B2

    Publication (Announcement) Date: 2021-03-02

    Application No.: US16420217

    Application Date: 2019-05-23

    Abstract: A network element includes multiple ports, a memory, multiple processors, two or more cache memories and cache-flushing circuitry. The multiple ports are configured to serve as ingress and egress ports for receiving and transmitting packets from and to a network. The memory is configured to store a forwarding table including rules that specify forwarding of the packets from the ingress ports to the egress ports. The multiple processors are configured to process the packets in accordance with the rules. The two or more cache memories are each configured to cache a respective copy of one or more of the rules, for use by the multiple processors. The cache-flushing circuitry is configured to trigger flushing operations of copies of rules in the cache memories in response to changes in the forwarding table, and to reduce a likelihood of simultaneous accesses to the forwarding table for updating multiple cache memories, by de-correlating or diluting the flushing operations.
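
    A toy version of the de-correlation idea is shown below: when a forwarding-table rule changes, only the caches holding that rule are flushed, each after an independent random delay rather than all at once. The jitter bound and data layout are assumptions for illustration, not details from the patent.

    import random

    class CacheFlusher:
        """Spreads flush operations over time so caches do not all re-read the table at once."""

        def __init__(self, caches, max_jitter_us=50.0):
            self.caches = caches            # one rule cache (a set of rule ids) per processor
            self.max_jitter_us = max_jitter_us

        def schedule_flushes(self, changed_rule_id):
            """Return (cache_index, delay_us) pairs for caches that hold the changed rule."""
            plan = []
            for idx, cache in enumerate(self.caches):
                if changed_rule_id not in cache:
                    continue                              # unaffected cache: no flush needed
                delay_us = random.uniform(0.0, self.max_jitter_us)
                plan.append((idx, delay_us))              # random delay de-correlates table reads
            return plan

    caches = [{1, 2, 3}, {2, 4}, {3, 5}]
    print(CacheFlusher(caches).schedule_flushes(changed_rule_id=2))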

    Network element with improved cache flushing

    Publication (Announcement) No.: US20200374230A1

    Publication (Announcement) Date: 2020-11-26

    Application No.: US16420217

    Application Date: 2019-05-23

    Abstract: A network element includes multiple ports, a memory, multiple processors, two or more cache memories and cache-flushing circuitry. The multiple ports are configured to serve as ingress and egress ports for receiving and transmitting packets from and to a network. The memory is configured to store a forwarding table including rules that specify forwarding of the packets from the ingress ports to the egress ports. The multiple processors are configured to process the packets in accordance with the rules. The two or more cache memories are each configured to cache a respective copy of one or more of the rules, for use by the multiple processors. The cache-flushing circuitry is configured to trigger flushing operations of copies of rules in the cache memories in response to changes in the forwarding table, and to reduce a likelihood of simultaneous accesses to the forwarding table for updating multiple cache memories, by de-correlating or diluting the flushing operations.

    Direct Memory Access (DMA) Engine for Diagnostic Data

    Publication (Announcement) No.: US20220224585A1

    Publication (Announcement) Date: 2022-07-14

    Application No.: US17145341

    Application Date: 2021-01-10

    Abstract: A network-connected device includes at least one communication port, packet processing circuitry and Diagnostics Direct Memory Access (DMA) Circuitry (DDC). The at least one communication port is configured to communicate packets over a network. The packet processing circuitry is configured to receive, buffer, process and transmit the packets. The DDC is configured to receive a definition of (i) one or more diagnostic events, and (ii) for each diagnostic event, a corresponding list of diagnostic data that is generated in the packet processing circuitry and that pertains to the diagnostic event, and, responsively to occurrence of a diagnostic event, to gather the corresponding list of diagnostic data from the packet processing circuitry.
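
    The event-to-data mapping described here can be sketched as a small dispatcher configured with, per event, the list of data items to read from the packet-processing circuitry when that event occurs. The register names and the callable read interface below are invented for the example.

    class DiagnosticsDMA:
        """Gathers a pre-configured list of diagnostic data when a diagnostic event occurs."""

        def __init__(self, read_item):
            self._read_item = read_item    # callable: data-item name -> value
            self._event_lists = {}         # event name -> list of data-item names

        def define_event(self, event, data_items):
            self._event_lists[event] = list(data_items)

        def on_event(self, event):
            """Collect the diagnostic data pertaining to the event."""
            return {item: self._read_item(item) for item in self._event_lists.get(event, [])}

    # Hypothetical register map standing in for the packet-processing circuitry.
    registers = {"queue_depth": 412, "drop_count": 7, "ecn_marks": 3}
    ddc = DiagnosticsDMA(registers.get)
    ddc.define_event("buffer_overflow", ["queue_depth", "drop_count"])
    print(ddc.on_event("buffer_overflow"))   # {'queue_depth': 412, 'drop_count': 7}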

    Efficient memory utilization and egress queue fairness

    Publication (Announcement) No.: US11171884B2

    Publication (Announcement) Date: 2021-11-09

    Application No.: US16351684

    Application Date: 2019-03-13

    Abstract: In one embodiment, a network device includes multiple ports to be connected to a packet data network so as to serve as both ingress and egress ports in receiving and forwarding of data packets including unicast and multicast data packets, a memory coupled to the ports and configured to contain a combined unicast-multicast user-pool storing the received unicast and multicast data packets, and packet processing logic to compute a combined unicast-multicast user-pool free-space based on counting only once at least some of the multicast packets stored once in the combined unicast-multicast user-pool, to compute an occupancy of an egress queue by counting a space used by the data packets of the egress queue in the combined unicast-multicast user-pool, and to apply an admission policy to a received data packet for entry into the egress queue based on at least the computed occupancy of the egress queue and the computed combined unicast-multicast user-pool free-space.
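
    The accounting described above can be illustrated by a pool that charges each stored packet once against free space while still charging every destination egress queue for its occupancy. The names and the single per-queue limit are illustrative assumptions, not the patent's admission policy.

    class SharedPool:
        """Combined unicast-multicast user-pool with per-egress-queue occupancy accounting."""

        def __init__(self, pool_bytes):
            self.pool_bytes = pool_bytes
            self.used_bytes = 0        # each stored packet counted once, multicast included
            self.queue_bytes = {}      # egress queue -> space its packets use in the pool

        def free_space(self):
            return self.pool_bytes - self.used_bytes

        def admit(self, queues, size, queue_limit):
            """Admit based on pool free space and the occupancy of each target egress queue.

            A unicast packet has one target queue; a multicast packet may have several but is
            still charged to the pool only once."""
            if size > self.free_space():
                return False
            if any(self.queue_bytes.get(q, 0) + size > queue_limit for q in queues):
                return False
            self.used_bytes += size    # single stored copy, counted once in the pool
            for q in queues:
                self.queue_bytes[q] = self.queue_bytes.get(q, 0) + size
            return True

    pool = SharedPool(pool_bytes=10_000)
    print(pool.admit(queues=["q0"], size=1_500, queue_limit=4_000))        # unicast
    print(pool.admit(queues=["q1", "q2"], size=1_500, queue_limit=4_000))  # multicast, charged once
    print(pool.free_space())                                               # 10_000 - 2 * 1_500 = 7_000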

    Direct memory access (DMA) engine for diagnostic data

    Publication (Announcement) No.: US11637739B2

    Publication (Announcement) Date: 2023-04-25

    Application No.: US17145341

    Application Date: 2021-01-10

    Abstract: A network-connected device includes at least one communication port, packet processing circuitry and Diagnostics Direct Memory Access (DMA) Circuitry (DDC). The at least one communication port is configured to communicate packets over a network. The packet processing circuitry is configured to receive, buffer, process and transmit the packets. The DDC is configured to receive a definition of (i) one or more diagnostic events, and (ii) for each diagnostic event, a corresponding list of diagnostic data that is generated in the packet processing circuitry and that pertains to the diagnostic event, and, responsively to occurrence of a diagnostic event, to gather the corresponding list of diagnostic data from the packet processing circuitry.
