Small Message Aggregation
    Invention Application

    Publication No.: US20210218808A1

    Publication Date: 2021-07-15

    Application No.: US17147487

    Filing Date: 2021-01-13

    Abstract: An apparatus includes one or more ports for connecting to a communication network, processing circuitry and a message aggregation circuit (MAC). The processing circuitry is configured to communicate messages over the communication network via the one or more ports. The MAC is configured to receive messages, which originate in one or more source processes and are destined to one or more destination processes, to aggregate two or more of the messages that share a common destination into an aggregated message, and to send the aggregated message using the processing circuitry over the communication network.
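
    The abstract describes buffering small messages that share a destination and sending them as a single aggregated message. The Python sketch below is one plausible software analogue of that behavior; the thresholds, length-prefixed framing, and flush policy are assumptions for illustration, not details taken from the patent.

        # Destination-based aggregation of small messages (illustrative sketch).
        from collections import defaultdict

        class MessageAggregator:
            def __init__(self, send_fn, max_msgs=8, max_bytes=1024):
                self.send_fn = send_fn            # transmits one (possibly aggregated) packet
                self.max_msgs = max_msgs          # assumed flush threshold (message count)
                self.max_bytes = max_bytes        # assumed flush threshold (payload bytes)
                self.pending = defaultdict(list)  # destination -> buffered small messages

            def submit(self, destination, payload: bytes):
                queue = self.pending[destination]
                queue.append(payload)
                if len(queue) >= self.max_msgs or sum(map(len, queue)) >= self.max_bytes:
                    self.flush(destination)

            def flush(self, destination):
                queue = self.pending.pop(destination, [])
                if queue:
                    # One aggregated message carries all buffered payloads, each
                    # prefixed with its length so the receiver can split them apart.
                    body = b"".join(len(p).to_bytes(2, "big") + p for p in queue)
                    self.send_fn(destination, body)

        # Three small messages bound for the same destination leave as one packet.
        agg = MessageAggregator(send_fn=lambda dst, data: print(dst, len(data), "bytes"))
        for _ in range(3):
            agg.submit("node-7", b"hello")
        agg.flush("node-7")                       # prints: node-7 21 bytes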

Congestion notification packet generation by switch

    Publication No.: US11005770B2

    Publication Date: 2021-05-11

    Application No.: US16442508

    Filing Date: 2019-06-16

    Abstract: Network communication is carried out by sending packets from a source network interface toward a destination network interface, receiving one of the packets in an intermediate switch of the network, determining that the intermediate switch is experiencing network congestion, generating in the intermediate switch a congestion notification packet for the received packet, and transmitting the congestion notification packet from the intermediate switch to the source network interface via the network. The received packet is forwarded from the intermediate switch toward the destination network interface. The source network interface may modify a rate of packet transmission responsively to the congestion notification packet.
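
    As a rough illustration of the described flow, the sketch below models an intermediate switch that, upon detecting congestion, generates a congestion notification packet addressed to the packet's source while still forwarding the original packet toward its destination. The queue-depth threshold, packet fields, and callback names are assumptions.

        # Switch-side congestion notification (illustrative sketch).
        from dataclasses import dataclass

        @dataclass
        class Packet:
            src: str
            dst: str
            payload: bytes

        class Switch:
            CONGESTION_THRESHOLD = 64          # queued packets; assumed congestion signal

            def __init__(self, forward_fn, notify_fn):
                self.forward_fn = forward_fn   # sends a packet on toward its destination
                self.notify_fn = notify_fn     # sends a packet back into the network
                self.egress_queue = []

            def receive(self, pkt: Packet):
                if len(self.egress_queue) >= self.CONGESTION_THRESHOLD:
                    # Generate a congestion notification packet (CNP) addressed to the
                    # packet's source; the source can then reduce its transmission rate.
                    self.notify_fn(Packet(src="switch", dst=pkt.src, payload=b"CNP"))
                self.egress_queue.append(pkt)
                self.forward_fn(pkt)           # the received packet is still forwarded

        sw = Switch(forward_fn=lambda p: None, notify_fn=lambda p: print("CNP ->", p.dst))
        sw.egress_queue = [None] * 64          # simulate a congested egress queue
        sw.receive(Packet(src="nic-a", dst="nic-b", payload=b"data"))   # prints: CNP -> nic-a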

    TCAM with multi region lookups and a single logical lookup

    Publication No.: US20210067448A1

    Publication Date: 2021-03-04

    Application No.: US16559658

    Filing Date: 2019-09-04

    Abstract: A network element includes ports, a hardware fabric, a packet classifier and control logic. The ports are configured to transmit and receive packets over a network. The fabric is configured to forward the packets between the ports. The packet classifier is configured to receive at least some of the packets and to specify an action to be applied to a packet in accordance with a set of rules. The classifier includes (i) multiple Ternary Content Addressable Memories (TCAMs), each TCAM configured to match the packet to a respective subset of the set of rules and to output a match result, and (ii) circuitry configured to specify the action to be applied to the packet based on match results produced for the packet by the multiple TCAMs, and based on a priority defined among the multiple TCAMs. The control logic is configured to apply the specified action to the packet.
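
    The sketch below models the described split of a single logical lookup across several TCAM regions: each region matches the packet key against its own subset of rules, and a fixed priority among the regions decides which match supplies the action. The value/mask rule encoding and the example rules are illustrative assumptions.

        # Multi-region lookup with a priority among regions (illustrative sketch).
        class TcamRegion:
            def __init__(self, rules):
                # Each rule is (value, mask, action); a key matches when key & mask == value & mask.
                self.rules = rules

            def lookup(self, key):
                for value, mask, action in self.rules:
                    if key & mask == value & mask:
                        return action
                return None

        def classify(key, regions):
            # Regions are ordered from highest to lowest priority; the first
            # region that reports a match determines the action.
            for region in regions:
                action = region.lookup(key)
                if action is not None:
                    return action
            return "default"

        acl = TcamRegion([(0x0A000000, 0xFF000000, "drop")])        # 10.0.0.0/8 -> drop
        fwd = TcamRegion([(0x0A000001, 0xFFFFFFFF, "forward:p3")])  # exact host rule, lower priority
        print(classify(0x0A000001, [acl, fwd]))                     # prints: drop (region priority wins)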

    Cut-through switching system
    Invention Application

    Publication No.: US20200374241A1

    Publication Date: 2020-11-26

    Application No.: US16417672

    Filing Date: 2019-05-21

    Abstract: A method including receiving at a buffer at least a portion of an incoming frame, holding in the buffer the at least a portion of the frame received at the buffer, keeping in the buffer the at least a portion of the frame held in the buffer after transmission of the incoming frame by transmission circuitry responsive to receiving a signal at the buffer indicating that the at least a portion of a frame held in the buffer should be kept, and clearing from the buffer the at least a portion of a frame held in the buffer responsive to receiving a signal to the buffer indicating that the at least a portion of the frame held in the buffer should be cleared. Related methods and apparatus are also described.
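
    The sketch below illustrates the keep/clear control described above: the buffer retains the received portion of a frame even after the transmission circuitry has sent it, until a signal indicates whether the held data should be kept (for example, for a possible retransmission) or cleared. The signal interface and the retransmission motivation are assumptions.

        # Hold / keep / clear control for a cut-through buffer (illustrative sketch).
        class CutThroughBuffer:
            def __init__(self):
                self.fragment = None              # at least a portion of an incoming frame

            def receive(self, data: bytes):
                self.fragment = data              # hold the received portion

            def on_transmitted(self, keep: bool):
                # Called after the transmission circuitry has sent the frame.
                # KEEP (keep=True) retains the held portion; CLEAR (keep=False) releases it.
                if not keep:
                    self.fragment = None

        buf = CutThroughBuffer()
        buf.receive(b"\x00" * 64)                 # first 64 bytes of an incoming frame
        buf.on_transmitted(keep=True)             # keep the data after cut-through transmission
        assert buf.fragment is not None
        buf.on_transmitted(keep=False)            # a later signal says the data can go
        assert buf.fragment is None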

    Network devices
    Invention Application
    Pending - Published

    Publication No.: US20200328987A1

    Publication Date: 2020-10-15

    Application No.: US16383711

    Filing Date: 2019-04-15

    Abstract: Apparatus including a network element including an input-output port, the input-output port including an input data lane and an output data lane, wherein the input data lane is in wired connection with a network data source external to the network element, the output data lane is in wired connection with a network data destination external to the network element, and the network data source is distinct from the network data destination. Related apparatus and methods are also described.
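
    A minimal data model of the wiring described above, in which one input-output port's input lane and output lane are connected to two distinct external devices; the device names are purely illustrative.

        # Asymmetric input/output lane wiring of a single port (illustrative sketch).
        from dataclasses import dataclass

        @dataclass
        class IoPort:
            input_lane_source: str        # external network data source wired to the input lane
            output_lane_destination: str  # distinct external destination wired to the output lane

        port = IoPort(input_lane_source="sensor-array-0",
                      output_lane_destination="aggregation-switch-1")
        assert port.input_lane_source != port.output_lane_destination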

    Efficient Memory Utilization and Egress Queue Fairness

    Publication No.: US20200296057A1

    Publication Date: 2020-09-17

    Application No.: US16351684

    Filing Date: 2019-03-13

    Abstract: In one embodiment, a network device includes multiple ports to be connected to a packet data network so as to serve as both ingress and egress ports in receiving and forwarding of data packets including unicast and multicast data packets, a memory coupled to the ports and to contain a combined unicast-multicast user-pool storing the received unicast and multicast data packets, and packet processing logic to compute a combined unicast-multicast user-pool free-space based on counting only once at least some of the multicast packets stored once in the combined unicast-multicast user-pool, compute an occupancy of an egress queue by counting a space used by the data packets of the egress queue in the combined unicast-multicast user-pool, apply an admission policy to a received data packet for entry into the egress queue based on at least the computed occupancy of the egress queue and the computed combined unicast-multicast user-pool free-space.
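
    The sketch below illustrates the accounting idea in software: data stored once in the combined unicast-multicast pool is counted once toward the pool's free space, each egress queue's occupancy tracks the space its own packets use, and admission checks both quantities. The pool size, per-queue limit, and exact admission rule are assumptions for illustration.

        # Combined-pool free space and per-egress-queue occupancy (illustrative sketch).
        class SharedPool:
            def __init__(self, capacity_bytes=1 << 20):
                self.capacity = capacity_bytes
                self.used = 0                 # bytes stored in the combined unicast-multicast pool
                self.queue_occupancy = {}     # egress queue id -> bytes attributed to that queue

            def free_space(self):
                return self.capacity - self.used

            def admit(self, size, egress_queues, queue_limit=64 << 10):
                # Admission policy: enough pool free space and no target queue over its limit.
                if self.free_space() < size:
                    return False
                if any(self.queue_occupancy.get(q, 0) + size > queue_limit for q in egress_queues):
                    return False
                self.used += size             # a multicast payload is stored (and counted) once
                for q in egress_queues:       # ...but occupancy is tracked per egress queue
                    self.queue_occupancy[q] = self.queue_occupancy.get(q, 0) + size
                return True

        pool = SharedPool()
        assert pool.admit(size=1500, egress_queues=["q1", "q2"])   # one stored copy, two queues
        assert pool.used == 1500 and pool.queue_occupancy == {"q1": 1500, "q2": 1500}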

Stream Synchronization
    Invention Application

    Publication No.: US20200287644A1

    Publication Date: 2020-09-10

    Application No.: US16291003

    Filing Date: 2019-03-04

    Abstract: A method including providing a network element including an ingress port, an egress port, and a delay equalizer, providing an equalization message generator, receiving, at the ingress port, a plurality of data packets from multiple sources, each data packet having a source indication and a source-provided time stamp, determining, at the ingress port, a received time stamp for at least some of the received data packets, passing the received data packets, the source-provided time stamps, and the received time stamps to the delay equalizer, the delay equalizer computing, for each source, a delay for synchronizing that source with other sources, the equalization message generator receiving an output, for each source, including the delay for that source, from the delay equalizer and producing a delay message instructing each source regarding the delay for that source, and sending, from the egress port, the delay message to each source. Related apparatus is also provided.
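
    One plausible reading of the delay-equalization step is sketched below: estimate each source's one-way latency from its source-provided and received timestamps, then instruct the faster sources to add enough delay to align with the slowest one. The specific equalization rule is an assumption; the abstract only states that a per-source delay is computed and sent back in a delay message.

        # Per-source delay computation for stream synchronization (illustrative sketch).
        def equalization_delays(samples):
            # samples: {source_id: (source_timestamp, received_timestamp)} in seconds,
            # assuming the sources' clocks are already aligned with the network element.
            latency = {src: rx - tx for src, (tx, rx) in samples.items()}
            slowest = max(latency.values())
            return {src: slowest - lat for src, lat in latency.items()}

        delays = equalization_delays({
            "cam-a": (10.000, 10.004),   # ~4 ms path
            "cam-b": (10.000, 10.009),   # ~9 ms path
        })
        print(delays)                    # cam-a is told to add ~5 ms of delay, cam-b none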

    Managing cache memory in a network element based on costs associated with fetching missing cache entries

    Publication No.: US10684960B2

    Publication Date: 2020-06-16

    Application No.: US15830021

    Filing Date: 2017-12-04

    Abstract: A network element includes a data structure, a cache memory and circuitry. The data structure is configured to store multiple rules specifying processing of packets received from a communication network. The cache memory is configured to cache multiple rules including a subset of the rules stored in the data structure. Each rule that is cached in the cache memory has a respective cost value corresponding to a cost of retrieving the rule from the data structure. The circuitry is configured to receive one or more packets from the communication network, to process the received packets in accordance with one or more of the rules, by retrieving the rules from the cache memory when available, or from the data structure otherwise, to select a rule to be evicted from the cache memory, based on one or more respective cost values of the rules currently cached, and to evict the selected rule.
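
    The sketch below shows one way the cost-based eviction could look: every cached rule carries the cost of re-fetching it from the backing rule database, and on overflow the rule that is cheapest to fetch again is evicted. The specific victim-selection function is an assumption; the abstract only requires that eviction be based on the cached rules' cost values.

        # Cost-aware rule cache with cheapest-to-refetch eviction (illustrative sketch).
        class CostAwareRuleCache:
            def __init__(self, capacity, fetch_fn):
                self.capacity = capacity
                self.fetch_fn = fetch_fn      # key -> (rule, fetch_cost) from the rule database
                self.entries = {}             # key -> (rule, fetch_cost)

            def lookup(self, key):
                if key in self.entries:
                    return self.entries[key][0]               # cache hit
                rule, cost = self.fetch_fn(key)               # miss: fetch from the data structure
                if len(self.entries) >= self.capacity:
                    victim = min(self.entries, key=lambda k: self.entries[k][1])
                    del self.entries[victim]                  # evict the cheapest-to-refetch rule
                self.entries[key] = (rule, cost)
                return rule

        costs = {"flow-1": 5, "flow-2": 1, "flow-3": 9}
        cache = CostAwareRuleCache(capacity=2, fetch_fn=lambda k: (f"rule({k})", costs[k]))
        for key in ("flow-1", "flow-2", "flow-3"):
            cache.lookup(key)
        assert set(cache.entries) == {"flow-1", "flow-3"}     # flow-2 (cost 1) was evicted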

    Deduplication of mirror traffic in analyzer aggregation network

    Publication No.: US20200145315A1

    Publication Date: 2020-05-07

    Application No.: US16181395

    Filing Date: 2018-11-06

    Abstract: A network switch includes multiple ports that serve as ingress ports and egress ports for connecting to a communication network, and processing circuitry. The processing circuitry is configured to receive packets via the ingress ports, select one or more of the packets for mirroring, create mirror copies of the selected packets and output the mirror copies for analysis, mark the packets for which mirror copies have been created with mirror-duplicate indications, and forward the packets to the egress ports, including the packets that are marked with the mirror-duplicate indications.
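
    The sketch below illustrates the marking step: when a switch creates a mirror copy of a packet, it sets a mirror-duplicate indication on the forwarded original so that downstream devices applying the same mirroring logic can recognize the packet and suppress a second copy. The flag name, selection callback, and packet model are assumptions.

        # Mirror-once marking to deduplicate mirror traffic (illustrative sketch).
        from dataclasses import dataclass

        @dataclass
        class Packet:
            flow_id: int
            mirror_duplicate: bool = False

        def process(pkt, should_mirror, send_to_analyzer, forward):
            if should_mirror(pkt) and not pkt.mirror_duplicate:
                send_to_analyzer(Packet(pkt.flow_id, mirror_duplicate=True))  # mirror copy
                pkt.mirror_duplicate = True    # mark the forwarded original as already mirrored
            forward(pkt)

        mirrored = []
        # The first hop mirrors the packet and marks it...
        pkt = Packet(flow_id=1)
        process(pkt, should_mirror=lambda p: True,
                send_to_analyzer=mirrored.append, forward=lambda p: None)
        # ...so a second hop running the same logic does not mirror it again.
        process(pkt, should_mirror=lambda p: True,
                send_to_analyzer=mirrored.append, forward=lambda p: None)
        assert len(mirrored) == 1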
