Low-latency processing of multicast packets

    Publication No.: US11218415B2

    Publication Date: 2022-01-04

    Application No.: US16194345

    Filing Date: 2018-11-18

    Abstract: A network element includes multiple ports and forwarding circuitry. The ports are configured to serve as network interfaces for exchanging packets with a communication network. The forwarding circuitry is configured to receive a multicast packet that is to be forwarded via a plurality of the ports over a plurality of paths through the communication network to a plurality of destinations, to identify a path having a highest latency among the multiple paths over which the multicast packet is to be forwarded, to forward the multicast packet to one or more of the paths other than the identified path, using a normal scheduling process having a first forwarding latency, and to forward the multicast packet to at least the identified path, using an accelerated scheduling process having a second forwarding latency, smaller than the first forwarding latency.
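    The scheduling idea in this abstract can be modeled in a few lines of Python. This is an illustrative software sketch only, not the patented hardware: the function name, the latency map, and the two forwarding callbacks are all assumptions standing in for the normal and accelerated scheduling processes.

    ```python
    def schedule_multicast(packet, path_latencies, fast_forward, normal_forward):
        """Forward one multicast packet over several paths, giving the
        highest-latency path the accelerated (low-latency) scheduler.

        path_latencies: dict mapping path id -> estimated path latency.
        fast_forward / normal_forward: hypothetical hooks that enqueue the
        packet with the accelerated or the normal scheduling process.
        """
        # Identify the path with the highest latency among all paths.
        slowest = max(path_latencies, key=path_latencies.get)
        for path in path_latencies:
            if path == slowest:
                fast_forward(packet, path)    # accelerated scheduling process
            else:
                normal_forward(packet, path)  # normal scheduling process
        return slowest
    ```

    The point of the arrangement is that accelerating only the worst path reduces the overall completion latency of the multicast without requiring accelerated resources for every port.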

    Collective communication system and methods

    Publication No.: US11196586B2

    Publication Date: 2021-12-07

    Application No.: US16789458

    Filing Date: 2020-02-13

    Abstract: A method in which a plurality of processes are configured to hold a block of data destined for other processes, with data repacking circuitry including receiving circuitry configured to receive at least one block of data from a source process of the plurality of processes, the repacking circuitry configured to repack received data in accordance with at least one destination process of the plurality of processes, and sending circuitry configured to send the repacked data to the at least one destination process of the plurality of processes, receiving a set of data for all-to-all data exchange, the set of data being configured as a matrix, the matrix being distributed among the plurality of processes, and transposing the data by each of the plurality of processes sending matrix data from the process to the repacking circuitry, and the repacking circuitry receiving, repacking, and sending the resulting matrix data to destination processes.
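    The repacking step amounts to a transpose of a block matrix distributed across processes. A minimal Python model (a software stand-in for the dedicated repacking circuitry; the function name and data layout are assumptions) makes the data movement concrete:

    ```python
    def all_to_all_repack(blocks):
        """Simulate the repacking stage of an all-to-all exchange.

        blocks[src][dst] is the block that process `src` holds destined
        for process `dst`. Returns out, where out[dst][src] is the block
        each destination ends up with -- i.e. the transposed layout.
        """
        n = len(blocks)
        # Receiving circuitry: collect every block sent by every source.
        received = [(src, dst, blocks[src][dst])
                    for src in range(n) for dst in range(n)]
        # Repack by destination process, then "send" to each destination.
        out = [[None] * n for _ in range(n)]
        for src, dst, data in received:
            out[dst][src] = data
        return out
    ```

    Centralizing the repack means each process issues one send per block rather than negotiating n² pairwise exchanges itself.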

    Network element supporting flexible data reduction operations

    Publication No.: US20210234753A1

    Publication Date: 2021-07-29

    Application No.: US16750019

    Filing Date: 2020-01-23

    Abstract: A network element includes a plurality of ports, multiple computational modules, configurable forwarding circuitry and a central block. The ports include child ports coupled to child network elements or network nodes and parent ports coupled to parent network elements. The computational modules collectively perform a data reduction operation of a data reduction protocol. The forwarding circuitry interconnects among ports and computational modules. The central block receives a request indicative of child ports, a parent port, and computational modules required for performing reduction operations on data received via the child ports, for producing reduced data destined to the parent port, to derive from the request a topology that interconnects among the child ports, parent port and computational modules for performing the data reduction operations and to forward the reduced data for transmission to the selected parent port, and to configure the forwarding circuitry to apply the topology.
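    The central block's role, deriving a reduction topology from a request and producing reduced data for the parent port, can be sketched in Python. The request shape, function name, and default reduction operator below are illustrative assumptions, not the patented protocol:

    ```python
    def reduce_and_forward(request, port_data, reduce_op=sum):
        """Model the central block of the data reduction element.

        request: {"child_ports": [...], "parent_port": ...} -- a
        hypothetical encoding of the ports named in the request.
        port_data: dict mapping each child port to its input vector.
        Returns (parent_port, reduced_vector): the element-wise reduction
        of the child inputs, destined for the parent port.
        """
        children = request["child_ports"]
        vectors = [port_data[p] for p in children]
        # Element-wise reduction across all child-port vectors.
        reduced = [reduce_op(col) for col in zip(*vectors)]
        return request["parent_port"], reduced
    ```

    Because the topology is derived per request, the same computational modules can be rewired for different sets of child and parent ports rather than being fixed at design time.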

    Low-Latency Processing of Multicast Packets
    Invention Application

    Publication No.: US20200162397A1

    Publication Date: 2020-05-21

    Application No.: US16194345

    Filing Date: 2018-11-18

    Abstract: A network element includes multiple ports and forwarding circuitry. The ports are configured to serve as network interfaces for exchanging packets with a communication network. The forwarding circuitry is configured to receive a multicast packet that is to be forwarded via a plurality of the ports over a plurality of paths through the communication network to a plurality of destinations, to identify a path having a highest latency among the multiple paths over which the multicast packet is to be forwarded, to forward the multicast packet to one or more of the paths other than the identified path, using a normal scheduling process having a first forwarding latency, and to forward the multicast packet to at least the identified path, using an accelerated scheduling process having a second forwarding latency, smaller than the first forwarding latency.

    Network monitoring using selective mirroring
    Invention Application

    Publication No.: US20180091387A1

    Publication Date: 2018-03-29

    Application No.: US15276823

    Filing Date: 2016-09-27

    Abstract: A network element includes multiple interfaces and circuitry. The interfaces are configured to connect to a communication system. The circuitry is configured to monitor a respective buffering parameter of data flows received via an ingress interface and queued while awaiting transmission via respective egress interfaces, to identify, based on the respective buffering parameter, at least one data flow for mirroring, to select one or more packets of the identified data flow for analysis by a network manager, and to send the selected packets to the network manager over the communication system via an egress interface.
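    The selection logic can be illustrated with a short Python sketch. Using queue depth as the buffering parameter is one possible choice (the abstract leaves the parameter open), and the function name and threshold policy are assumptions:

    ```python
    def select_flows_for_mirroring(flow_queue_depth, threshold):
        """Pick data flows whose buffering parameter exceeds a threshold.

        flow_queue_depth: dict mapping flow id -> queued bytes (or packets)
        awaiting transmission on the egress side. Flows above the threshold
        become candidates whose packets are mirrored to the network manager.
        """
        return [flow for flow, depth in flow_queue_depth.items()
                if depth > threshold]
    ```

    Mirroring only the flows that show abnormal buffering keeps the volume of traffic sent to the network manager small while still capturing the flows most likely to indicate congestion.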

    High performance computing system
    Invention Grant

    Publication No.: US11625393B2

    Publication Date: 2023-04-11

    Application No.: US16782118

    Filing Date: 2020-02-05

    Abstract: A method including providing a SHARP tree including a plurality of data receiving processes and at least one aggregation node, designating a data movement command, providing a plurality of data input vectors to each of the plurality of data receiving processes, respectively, the plurality of data receiving processes each passing on the respective received data input vector to the at least one aggregation node, and the at least one aggregation node carrying out the data movement command on the received plurality of data input vectors. Related apparatus and methods are also provided.
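    A toy Python model of the aggregation node clarifies what "carrying out a data movement command" on the received vectors can look like. The command names and gather semantics here are illustrative assumptions, not the SHARP protocol's actual command set:

    ```python
    def aggregate(vectors, command):
        """Hypothetical aggregation-node step in a SHARP-style tree.

        vectors: the data input vectors received from the data receiving
        processes, in process order.
        command: a data movement command to apply at the node.
        """
        if command == "gather":
            # Concatenate the per-process vectors in process order.
            return [x for vec in vectors for x in vec]
        if command == "sum":
            # Element-wise reduction is the other common in-tree operation.
            return [sum(col) for col in zip(*vectors)]
        raise ValueError(f"unknown command: {command}")
    ```

    Performing the movement inside the tree node means the result is formed once, in the network, instead of each process exchanging data with every other process.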

    Head-of-queue blocking for multiple lossless queues

    Publication No.: US11470010B2

    Publication Date: 2022-10-11

    Application No.: US16783184

    Filing Date: 2020-02-06

    Abstract: A network element includes at least one headroom buffer, and flow-control circuitry. The headroom buffer is configured for receiving and storing packets from a peer network element having at least two data sources, each headroom buffer serving multiple packets. The flow-control circuitry is configured to quantify a congestion severity measure, and, in response to detecting a congestion in the headroom buffer, to send to the peer network element pause-request signaling that instructs the peer network element to stop transmitting packets that (i) are associated with the congested headroom buffer and (ii) have priorities that are selected based on the congestion severity measure.
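    Selecting which priorities to pause from a congestion severity measure can be sketched as follows. This is a toy policy under stated assumptions (severity = buffer fill fraction, lowest priorities paused first); the abstract does not specify the actual mapping:

    ```python
    def priorities_to_pause(buffer_fill, capacity, num_priorities=8):
        """Map a congestion severity measure to a set of priorities to pause.

        buffer_fill / capacity gives the severity in [0, 1]. The more
        severe the congestion, the more priority classes are paused,
        starting from the lowest (here, higher number = lower priority).
        """
        severity = buffer_fill / capacity            # severity measure
        n_paused = min(num_priorities, int(severity * num_priorities))
        # Pause the n_paused lowest-priority classes.
        return set(range(num_priorities - n_paused, num_priorities))
    ```

    Pausing selectively by priority avoids the head-of-queue blocking of a blanket pause: high-priority lossless traffic keeps flowing while only the classes filling the headroom buffer are throttled.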

    Network element supporting flexible data reduction operations

    Publication No.: US11252027B2

    Publication Date: 2022-02-15

    Application No.: US16750019

    Filing Date: 2020-01-23

    Abstract: A network element includes a plurality of ports, multiple computational modules, configurable forwarding circuitry and a central block. The ports include child ports coupled to child network elements or network nodes and parent ports coupled to parent network elements. The computational modules collectively perform a data reduction operation of a data reduction protocol. The forwarding circuitry interconnects among ports and computational modules. The central block receives a request indicative of child ports, a parent port, and computational modules required for performing reduction operations on data received via the child ports, for producing reduced data destined to the parent port, to derive from the request a topology that interconnects among the child ports, parent port and computational modules for performing the data reduction operations and to forward the reduced data for transmission to the selected parent port, and to configure the forwarding circuitry to apply the topology.

    Head-of-Queue Blocking for Multiple Lossless Queues

    Publication No.: US20210250300A1

    Publication Date: 2021-08-12

    Application No.: US16783184

    Filing Date: 2020-02-06

    Abstract: A network element includes at least one headroom buffer, and flow-control circuitry. The headroom buffer is configured for receiving and storing packets from a peer network element having at least two data sources, each headroom buffer serving multiple packets. The flow-control circuitry is configured to quantify a congestion severity measure, and, in response to detecting a congestion in the headroom buffer, to send to the peer network element pause-request signaling that instructs the peer network element to stop transmitting packets that (i) are associated with the congested headroom buffer and (ii) have priorities that are selected based on the congestion severity measure.

    Hardware acceleration for uploading/downloading databases

    Publication No.: US10915479B1

    Publication Date: 2021-02-09

    Application No.: US16537576

    Filing Date: 2019-08-11

    Abstract: A network element includes one or more ports for communicating over a network, a processor and packet processing hardware. The packet processing hardware is configured to transfer packets to and from the ports, and further includes data-transfer circuitry for data transfer with the processor. The processor and the data-transfer circuitry are configured to transfer between one another (i) one or more communication packets for transferal between the ports and the processor and (ii) one or more databases for transferal between the packet processing hardware and the processor, by (i) translating, by the processor, the transferal of both the communication packets and the databases into work elements, and posting the work elements on one or more work queues in a memory of the processor, and (ii) using the data-transfer circuitry, executing the work elements so as to transfer both the communication packets and the databases.
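    The key idea, translating both packet transfers and database transfers into uniform work elements on shared work queues, maps naturally onto a small Python model. The class, element shape, and method names below are illustrative assumptions standing in for the processor/data-transfer-circuitry interface:

    ```python
    from collections import deque

    class WorkQueue:
        """Toy model of the shared work-queue pattern: the processor posts
        work elements describing either a packet or a database transfer;
        the data-transfer engine drains the queue and executes each one."""

        def __init__(self):
            self.q = deque()

        def post(self, kind, payload):
            # kind: "packet" or "database" -- both kinds of transfer are
            # translated into the same work-element shape.
            self.q.append({"kind": kind, "payload": payload})

        def execute_all(self, transfer):
            """Drain the queue, letting `transfer` (a stand-in for the
            data-transfer circuitry) move each payload."""
            executed = []
            while self.q:
                we = self.q.popleft()
                transfer(we)
                executed.append(we["kind"])
            return executed
    ```

    Reusing one queue-and-engine path for both traffic types is what lets database uploads and downloads ride the same hardware acceleration as ordinary packet I/O.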
