Network interface device that sets an ECN-CE bit in response to detecting congestion at an internal bus interface

    Publication number: US10673648B1

    Publication date: 2020-06-02

    Application number: US16358514

    Filing date: 2019-03-19

    Abstract: A network device includes a Network Interface Device (NID) and multiple servers. Each server is coupled to the NID via a corresponding PCIe bus. The NID has a network port through which it receives packets. The packets are destined for one of the servers. The NID detects a PCIe congestion condition regarding the PCIe bus to the server. Rather than transferring the packet across the bus, the NID buffers the packet and places a pointer to the packet in an overflow queue. If the level of bus congestion is high, the NID sets the packet's ECN-CE bit. When PCIe bus congestion subsides, the packet passes to the server. The server responds by returning an ACK whose ECE bit is set. The originating TCP endpoint in turn reduces the rate at which it sends data to the destination server, thereby reducing congestion at the PCIe bus interface within the network device.
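The marking behavior described above can be sketched as follows. This is an illustrative software model only; the class and threshold names (`Nid`, `CE_THRESHOLD`) are invented, not taken from the patent.

```python
# Hypothetical model of the NID's congestion handling: when the PCIe bus
# is busy, packets are buffered in an overflow queue, and once the queue
# is deep enough the ECN-CE codepoint is set before delivery.
from collections import deque

ECN_CE = 0b11        # ECN field codepoint meaning "Congestion Experienced"
CE_THRESHOLD = 4     # hypothetical overflow-queue depth that triggers marking

class Nid:
    def __init__(self):
        self.overflow = deque()   # queued packets awaiting the PCIe bus
        self.delivered = []       # packets handed to the server

    def receive(self, packet, bus_busy):
        if bus_busy:
            # Bus congested: buffer the packet instead of transferring it.
            if len(self.overflow) >= CE_THRESHOLD:
                packet["ecn"] = ECN_CE   # high congestion: set ECN-CE
            self.overflow.append(packet)
        else:
            self.deliver(packet)

    def drain(self):
        # Called when PCIe bus congestion subsides.
        while self.overflow:
            self.deliver(self.overflow.popleft())

    def deliver(self, packet):
        self.delivered.append(packet)
```

In the patent's scenario, the server would then echo the marking by setting the ECE bit in its TCP ACK, which causes the originating endpoint to reduce its send rate.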

    Executing a selected sequence of instructions depending on packet type in an exact-match flow switch

    Publication number: US10230638B2

    Publication date: 2019-03-12

    Application number: US16042339

    Filing date: 2018-07-23

    Abstract: An integrated circuit includes a processor and an exact-match flow table structure. A first packet is received onto the integrated circuit. The packet is determined to be of a first type. As a result of this determination, execution by the processor of a first sequence of instructions is initiated. This execution causes bits of the first packet to be concatenated and modified in a first way, thereby generating a first Flow Id. The first Flow Id is an exact-match for the Flow Id of a first stored flow entry. A second packet is received. It is determined to be of a second type. As a result, a second sequence of instructions is executed. This causes bits of the second packet to be concatenated and modified in a second way, thereby generating a second Flow Id. The second Flow Id is an exact-match for the Flow Id of a second stored flow entry.
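The dispatch described above can be modeled as selecting one of several Flow Id builders by packet type and then requiring an exact match. The field names and bit layouts here are assumptions for illustration, not the patent's actual formats.

```python
# Illustrative model: packet type selects an "instruction sequence"
# (a Flow Id builder), and the result must exact-match a table entry.

def flow_id_ipv4(pkt):
    # "First sequence": concatenate and modify bits one way.
    return (pkt["src"] << 16) | pkt["dst"]

def flow_id_vlan(pkt):
    # "Second sequence": a different concatenation/modification.
    return (pkt["vlan"] << 24) | (pkt["src"] ^ pkt["dst"])

SEQUENCES = {"ipv4": flow_id_ipv4, "vlan": flow_id_vlan}

def lookup(flow_table, pkt):
    # Build the Flow Id with the type-selected sequence, then require
    # an exact match in the flow table (no wildcards, no masking).
    fid = SEQUENCES[pkt["type"]](pkt)
    return flow_table.get(fid)  # None if no exact-match entry exists
```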

    Method of dynamically allocating buffers for packet data received onto a networking device

    Publication number: US10069767B1

    Publication date: 2018-09-04

    Application number: US14928493

    Filing date: 2015-10-30

    Abstract: A method of dynamically allocating buffers involves receiving a packet onto an ingress circuit. The ingress circuit includes a memory that stores a free buffer list and an allocated buffer list. Packet data of the packet is stored into a buffer. The buffer is associated with a buffer identification (ID). The buffer ID is moved from the free buffer list to the allocated buffer list once the packet data is stored in the buffer. The buffer ID is used to read the packet data from the buffer and into an egress circuit and is stored in a de-allocation buffer list in the egress circuit. A send buffer IDs command is received from a processor onto the egress circuit and instructs the egress circuit to send the buffer ID to the ingress circuit such that the buffer ID is pushed onto the free buffer list.
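The buffer ID lifecycle above (free list → allocated list → de-allocation list → back to free) can be sketched as below. Class and method names are invented for illustration; the patent describes hardware circuits, not Python objects.

```python
# Minimal model of the dynamic buffer allocation scheme: IDs circulate
# between an ingress free list, an allocated list, and an egress
# de-allocation list.
from collections import deque

class IngressCircuit:
    def __init__(self, n_buffers):
        self.free = deque(range(n_buffers))   # free buffer list
        self.allocated = []                   # allocated buffer list
        self.buffers = {}                     # buffer ID -> packet data

    def store_packet(self, data):
        buf_id = self.free.popleft()          # take an ID off the free list
        self.buffers[buf_id] = data
        self.allocated.append(buf_id)         # ID moves to allocated list
        return buf_id

class EgressCircuit:
    def __init__(self, ingress):
        self.ingress = ingress
        self.dealloc = []                     # de-allocation buffer list

    def read_packet(self, buf_id):
        data = self.ingress.buffers.pop(buf_id)
        self.ingress.allocated.remove(buf_id)
        self.dealloc.append(buf_id)           # hold ID until released
        return data

    def send_buffer_ids(self):
        # "Send buffer IDs" command from the processor: push the held
        # IDs back onto the ingress free list.
        while self.dealloc:
            self.ingress.free.append(self.dealloc.pop())
```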

    Ordering system that employs chained ticket release bitmap block functions

    Publication number: US10032119B1

    Publication date: 2018-07-24

    Application number: US14579458

    Filing date: 2014-12-22

    Abstract: An ordering system receives release requests to release packets, where each packet has an associated sequence number, but the system only releases packets sequentially in accordance with the sequence numbers. The system includes a Ticket Order Release Command Dispatcher And Sequence Number Translator (TORCDSNT) and a plurality of Ticket Order Release Bitmap Blocks (TORBBs). The TORBBs are stored in one or more transactional memories. In response to receiving release requests, the TORCDSNT issues atomic ticket release commands to the transactional memory or memories, and uses the multiple TORBBs in a chained manner to implement a larger overall ticket release bitmap than could otherwise be supported by any one of the TORBBs individually. Special use of one flag bit position in each TORBB facilitates this chaining. In one example, the system is implemented in a network flow processor so that the TORBBs are maintained in transactional memories spread across the chip.
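The core in-order release behavior can be sketched as follows. This simplified model uses a single set of pending sequence numbers; the patent's contribution is chaining several small hardware bitmap blocks (via a flag bit in each) to act as one large bitmap, which is omitted here.

```python
# Simplified ticket-release model: release requests arriving out of
# order are flagged as pending; an in-order request releases itself
# plus any consecutive run of pending successors.

class TicketOrderRelease:
    def __init__(self):
        self.next_seq = 0     # next sequence number allowed to release
        self.pending = set()  # "bitmap" of out-of-order release requests

    def request_release(self, seq):
        """Return the list of sequence numbers actually released."""
        if seq != self.next_seq:
            self.pending.add(seq)   # too early: just set the bit
            return []
        # In-order request: release it, then sweep forward through any
        # consecutive pending sequence numbers.
        released = [seq]
        self.next_seq = seq + 1
        while self.next_seq in self.pending:
            self.pending.remove(self.next_seq)
            released.append(self.next_seq)
            self.next_seq += 1
        return released
```

In the patent, the set-and-sweep is performed by atomic ticket release commands against bitmaps held in transactional memories, so multiple processors can issue requests concurrently.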

    Configurable mesh data bus in an island-based network flow processor

    Publication number: US10031878B2

    Publication date: 2018-07-24

    Application number: US15463857

    Filing date: 2017-03-20

    Inventor: Gavin J. Stark

    Abstract: An island-based network flow processor (IB-NFP) integrated circuit includes rectangular islands disposed in rows. A configurable mesh data bus includes a command mesh, a pull-id mesh, and two data meshes. The configurable mesh data bus extends through all the islands. For each mesh, each island includes a centrally located crossbar switch and eight half links. Two half links extend to ports on the top edge of the island, a half link extends to a port on a right edge of the island, two half links extend to ports on the bottom edge of the island, and a half link extends to a port on the left edge of the island. Two additional links extend to functional circuitry of the island. The configurable mesh data bus is configurable to form a command/push/pull data bus over which multiple transactions can occur simultaneously on different parts of the integrated circuit.

    Split packet transmission DMA engine

    Publication number: US09990307B1

    Publication date: 2018-06-05

    Application number: US14527642

    Filing date: 2014-10-29

    CPC classification number: G06F12/1081 G06F13/30 H04L12/40071

    Abstract: Packet information is stored in split fashion such that a first part is stored in a first device and a second part is stored in a second device. A split packet transmission DMA engine receives an egress packet descriptor. The descriptor does not indicate where the second part is stored but contains information about the first part. Using this information, the DMA engine causes a part of the first part to be transferred from the first device to the DMA engine. Address information in the first part indicates where the second part is stored. The DMA engine uses the address information to cause the second part to be transferred from the second device to the DMA engine. When both the part of the first part and the second part are stored in the DMA engine, then the entire packet is transferred in ordered fashion to an egress device.
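The two-stage fetch above (descriptor → first part → embedded address → second part → ordered output) can be sketched as below. The descriptor field names and device layouts are assumptions for illustration.

```python
# Hypothetical model of the split-transmission flow. The egress packet
# descriptor locates only the first part; the first part carries the
# address of the second part.

def transmit(descriptor, device_a, device_b):
    # Stage 1: fetch (part of) the first part from the first device.
    first = device_a[descriptor["first_addr"]]
    # Stage 2: the address embedded in the first part locates the
    # second part on the second device.
    second = device_b[first["second_addr"]]
    # Only once both pieces are held locally is the packet emitted,
    # in order, to the egress device.
    return first["bytes"] + second
```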

    Recursive lookup with a hardware trie structure that has no sequential logic elements

    Publication number: US09899996B1

    Publication date: 2018-02-20

    Application number: US14556135

    Filing date: 2014-11-29

    CPC classification number: H03K17/00 G06F9/467 G06F13/40 H04L45/745 H04L45/748

    Abstract: A hardware trie structure includes a tree of internal node circuits and leaf node circuits. Each internal node is configured by a corresponding multi-bit node control value (NCV). Each leaf node can output a corresponding result value (RV). An input value (IV) supplied onto input leads of the trie causes signals to propagate through the trie such that one of the leaf nodes outputs one of the RVs onto output leads of the trie. In a transactional memory, a memory stores a set of NCVs and RVs. In response to a lookup command, the NCVs and RVs are read out of memory and are used to configure the trie. The IV of the lookup is supplied to the input leads, and the trie looks up an RV. A non-final RV initiates another lookup in a recursive fashion, whereas a final RV is returned as the result of the lookup command.
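The recursive lookup described above can be modeled in software as follows. The node encoding (tuples tagged `"node"`, `"final"`, `"recurse"`) is invented for illustration; in the patent the trie is pure combinational hardware reconfigured by NCVs read from memory on each lookup.

```python
# Software model of the recursive trie lookup: each internal node's
# control value (NCV) selects which bit of the input value (IV) steers
# the descent; a leaf yields a result value (RV) that is either final
# or names another trie configuration to search.

def lookup(trie, iv):
    node = trie
    while node[0] == "node":
        _, ncv, left, right = node
        node = right if (iv >> ncv) & 1 else left   # NCV picks the bit
    return node  # a leaf: ("final", rv) or ("recurse", next_trie_id)

def recursive_lookup(tries, iv):
    # A non-final RV initiates another lookup with a new configuration.
    kind, value = lookup(tries["root"], iv)
    while kind == "recurse":
        kind, value = lookup(tries[value], iv)
    return value
```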

    Making a flow ID for an exact-match flow table using a programmable reduce table circuit

    Publication number: US09819585B1

    Publication date: 2017-11-14

    Application number: US14726428

    Filing date: 2015-05-29

    CPC classification number: H04L45/745 H04L49/00 H04L69/22

    Abstract: An exact-match flow table structure stores flow entries. Each flow entry includes a Flow Id. A flow entry is generated from an incoming packet. The flow table structure determines whether there is a stored flow entry, the Flow Id of which is an exact-match for the generated Flow Id. In one novel aspect, a programmable reduce table circuit is used to generate a Flow Id. A selected subset of bits of an incoming packet is supplied as an address to an SRAM, so that the SRAM outputs a data value. The data value is supplied to a programmable lookup circuit such that the lookup circuit performs a selected type of lookup operation, and outputs a result value of a reduced number of bits. A multiplexer circuit is used to form a Flow Id such that the result value is a part of the Flow Id.
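The reduce-table path above (selected bits → SRAM address → data value → programmable reduction → mux into the Flow Id) can be sketched as below. The widths, mask values, and reduction operation are assumptions, not the patent's actual parameters.

```python
# Illustrative sketch of the reduce-table circuit: a subset of packet
# bits addresses a small memory, a programmable lookup reduces the
# returned value to fewer bits, and a multiplexer places the result
# into the Flow Id.

def make_flow_id(pkt_bits, sram, reduce_op, high_bits):
    addr = pkt_bits & 0xFF            # selected subset of packet bits
    data = sram[addr]                 # SRAM read: address -> data value
    reduced = reduce_op(data)         # programmable lookup: wide -> narrow
    # Multiplexer: the reduced value forms the low bits of the Flow Id,
    # other packet-derived bits form the high bits.
    return (high_bits << 8) | (reduced & 0xFF)
```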

    Crossbar and an egress packet modifier in an exact-match flow switch

    Publication number: US09807006B1

    Publication date: 2017-10-31

    Application number: US14726433

    Filing date: 2015-05-29

    CPC classification number: H04L45/745 H04L5/0044 H04L47/10 H04L49/101 H04L69/22

    Abstract: An integrated circuit includes an exact-match flow table structure, a crossbar switch, and an egress packet modifier. Each flow entry includes an egress action value, an egress flow number, and an egress port number. A Flow Id is generated from an incoming packet. The Flow Id is used to obtain a matching flow entry. A portion of the packet is communicated across the crossbar switch to the egress packet modifier, along with the egress action value and flow number. The egress action value is used to obtain non-flow specific header information stored in a first egress memory. The egress flow number is used to obtain flow specific header information stored in a second egress memory. The egress packet modifier adds the header information onto the portion of the packet, thereby generating a complete packet. The complete packet is then output from an egress port indicated by the egress port number.
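The two-memory header assembly performed by the egress packet modifier can be sketched as below; the memory contents and key values are invented examples.

```python
# Sketch of egress header assembly: the egress action value keys the
# first memory (non-flow-specific header), the egress flow number keys
# the second memory (flow-specific header), and both are prepended to
# the packet portion received over the crossbar.

NONFLOW_MEM = {1: b"ETH|"}          # keyed by egress action value
FLOW_MEM = {42: b"VLAN42|"}         # keyed by egress flow number

def modify(payload, action_value, flow_number):
    # Prepend non-flow-specific, then flow-specific, header information
    # to form the complete outgoing packet.
    return NONFLOW_MEM[action_value] + FLOW_MEM[flow_number] + payload
```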

    Maintaining bypass packet count values

    Publication number: US09755910B1

    Publication date: 2017-09-05

    Application number: US14923457

    Filing date: 2015-10-27

    Abstract: A networking device includes a Network Interface Device (NID) and a host. Packets are received onto the networking device via the NID. Some of the packets pass along paths from the NID to the host, whereas others do not pass to the host and are processed by the NID. A bypass packet count for each path that passes from the NID to the host is maintained on the NID. It is determined, using a match table, that one of the packets received on the NID is to be sent to the host. The packet, however, is instead sent along a bypass path without going through the host, even though the match table indicated it should have gone to the host. The path that the packet would have traversed had the packet not been sent along the bypass path is determined, and the bypass packet count associated with the determined path is incremented.
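The accounting described above can be sketched as follows: the match table names the host path the packet would have taken, and the count for that path is incremented even though the packet is actually bypassed. Table contents and names are illustrative.

```python
# Minimal model of bypass packet counting: per-path counters record
# deliveries that the match table routed to the host but that were
# actually sent along a bypass path.
from collections import Counter

MATCH_TABLE = {"flow-a": "host-path-1", "flow-b": "host-path-2"}
bypass_counts = Counter()

def handle(packet):
    intended = MATCH_TABLE[packet["flow"]]  # path per the match table
    bypass_counts[intended] += 1            # count against that path
    return "bypass"                         # packet actually skips the host
```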
