Executing a selected sequence of instructions depending on packet type in an exact-match flow switch

    Publication number: US10033638B1

    Publication date: 2018-07-24

    Application number: US14726438

    Filing date: 2015-05-29

    Abstract: An integrated circuit includes a processor and an exact-match flow table structure. A first packet is received onto the integrated circuit. The packet is determined to be of a first type. As a result of this determination, execution by the processor of a first sequence of instructions is initiated. This execution causes bits of the first packet to be concatenated and modified in a first way, thereby generating a first Flow Id. The first Flow Id is an exact match for the Flow Id of a first stored flow entry. A second packet is received and is determined to be of a second type. As a result, a second sequence of instructions is executed. This causes bits of the second packet to be concatenated and modified in a second way, thereby generating a second Flow Id. The second Flow Id is an exact match for the Flow Id of a second stored flow entry.
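
    The mechanism in this abstract can be sketched in software terms: the packet type selects which instruction sequence builds the Flow Id, and the result is then looked up by exact match only. This is a minimal illustration, assuming Python functions as stand-ins for the processor's instruction sequences; all names here are illustrative, not from the patent.

```python
# Hypothetical sketch: packet type selects the Flow-Id-building routine.

def build_flow_id_type_a(packet):
    # "First way": concatenate two fields and mask the low bits of each byte.
    return bytes(b & 0xF0 for b in packet[0:4] + packet[4:8])

def build_flow_id_type_b(packet):
    # "Second way": a different concatenation/modification of packet bits.
    return packet[2:6] + packet[0:2]

BUILDERS = {"first": build_flow_id_type_a, "second": build_flow_id_type_b}

def lookup(packet, ptype, flow_table):
    flow_id = BUILDERS[ptype](packet)  # execute the selected sequence
    return flow_table.get(flow_id)     # exact match: no wildcards, no masks
```

    The key property mirrored here is that the flow table itself never does masked or wildcard matching; all per-type flexibility lives in the instruction sequence that constructs the Flow Id.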

    Chained CPP command

    Publication number: US09846662B2

    Publication date: 2017-12-19

    Application number: US14492015

    Filing date: 2014-09-20

    CPC classification number: G06F13/28 G06F12/1081 G06F13/1642 G06F13/4027

    Abstract: A chained Command/Push/Pull (CPP) bus command is output by a first device and is sent from a CPP bus master interface across a set of command conductors of a CPP bus to a second device. The chained CPP command includes a reference value. The second device decodes the command, in response determines a plurality of CPP commands, and outputs the plurality of CPP commands onto the CPP bus. The second device detects when the plurality of CPP commands have been completed, and in response returns the reference value back to the CPP bus master interface of the first device via a set of data conductors of the CPP bus. The reference value indicates to the first device that an overall operation of the chained CPP command has been completed.
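
    The expansion-and-completion flow above can be modeled abstractly: the second device expands one chained command into several plain commands, executes them, and only then returns the caller's reference value. This is a behavioral sketch only, not the bus-level protocol; the class and field names are assumptions.

```python
# Behavioral model of a chained CPP command's expansion and completion.

class SecondDevice:
    def __init__(self):
        self.completed = []            # record of sub-commands carried out

    def expand(self, chained_cmd):
        # Decode the chained command into a plurality of plain CPP commands.
        return [f"{chained_cmd['op']}-{i}" for i in range(chained_cmd["count"])]

    def execute_chained(self, chained_cmd):
        for cmd in self.expand(chained_cmd):
            self.completed.append(cmd)  # issue each command on the bus
        # All sub-commands done: return the reference value (over the data
        # conductors, in the patent) so the first device knows the overall
        # operation is complete.
        return chained_cmd["ref"]
```

    The design point this captures is that the first device issues one command and waits for one reference value; it never sees or tracks the individual sub-commands.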

    Skip instruction to skip a number of instructions on a predicate

    Publication number: US09830153B2

    Publication date: 2017-11-28

    Application number: US14311222

    Filing date: 2014-06-20

    Inventor: Gavin J. Stark

    CPC classification number: G06F9/30069 G06F9/30072 G06F9/30145 G06F9/3802

    Abstract: A pipelined run-to-completion processor executes a conditional skip instruction. If a predicate condition as specified by a predicate code field of the skip instruction is true, then the skip instruction causes execution of a number of instructions following the skip instruction to be “skipped”. The number of instructions to be skipped is specified by a skip count field of the skip instruction. In some examples, the skip instruction includes a “flag don't touch” bit. If this bit is set, then neither the skip instruction nor any of the skipped instructions can change the values of the flags. Both the skip instruction and following instructions to be skipped are decoded one by one in sequence and pass through the processor pipeline, but the execution stage is prevented from carrying out the instruction operation of a following instruction if the predicate condition of the skip instruction was true.
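
    The skip semantics can be illustrated with a toy interpreter: instructions still flow through in sequence, but the execute stage suppresses the next `skip_count` operations when the predicate was true. The instruction tuple format is an assumption for illustration; the "flag don't touch" bit (which, when set, prevents the skip and skipped instructions from altering flags) is carried but not further modeled.

```python
# Toy model of conditional skip in a run-to-completion pipeline.

def run(program, flags):
    """Each instruction is ('skip', predicate, count, dont_touch)
    or ('op', name). Returns the operations actually executed."""
    executed = []
    suppress = 0          # how many following instructions to squash
    for instr in program:
        if suppress:      # still decoded and passed through the pipeline,
            suppress -= 1 # but the execute stage does not carry it out
            continue
        if instr[0] == "skip":
            _, predicate, count, _dont_touch = instr
            if predicate(flags):
                suppress = count   # from the skip count field
        else:
            executed.append(instr[1])
    return executed
```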

    Simultaneous queue random early detection dropping and global random early detection dropping system

    Publication number: US09705811B2

    Publication date: 2017-07-11

    Application number: US14507652

    Filing date: 2014-10-06

    CPC classification number: H04L47/326 H04L47/6275 H04L49/00

    Abstract: A method involves receiving a packet descriptor associated with a packet and a queue number indicating a queue stored within a memory unit, and determining a priority level of the packet and an amount of free memory available in the memory unit. A global drop probability is applied to generate a global drop indicator, and a queue drop probability is applied to generate a queue drop indicator. The global drop probability is a function of the amount of free memory. The queue drop probability is a function of instantaneous queue depth or drop precedence value. The packet is transmitted whenever the priority level is high. When the priority level is low, the packet is transmitted when both the global drop indicator and the queue drop indicator are a logic low value, and is not transmitted when either drop indicator is a logic high value.
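
    The combined transmit decision can be sketched as follows. The two probability values are stand-ins: in the patent, the global drop probability is derived from free memory and the queue drop probability from instantaneous queue depth or drop precedence; here they are simply passed in.

```python
# Sketch of the dual (global + per-queue) random early detection decision.
import random

def transmit(priority_high, global_drop_prob, queue_drop_prob,
             rng=random.random):
    if priority_high:
        return True                          # high priority always transmits
    global_drop = rng() < global_drop_prob   # logic high = drop indicated
    queue_drop = rng() < queue_drop_prob
    # Low priority: transmit only if BOTH indicators are logic low.
    return not (global_drop or queue_drop)
```

    Injecting `rng` makes the randomized decision testable; hardware would draw from an LFSR or similar source instead.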

    Generating a flow ID by passing packet data serially through two CCT circuits

    Publication number: US09641436B1

    Publication date: 2017-05-02

    Application number: US14726441

    Filing date: 2015-05-29

    CPC classification number: H04L45/745 H04L45/38 H04L47/2441 H04L69/22

    Abstract: An integrated circuit includes an input port, a first Characterize/Classify/Table Lookup and Multiplexer Circuit (CCTC), a second CCTC, and an exact-match flow table structure. The first and second CCTCs are structurally identical and are coupled together serially. In one example, an incoming packet is received onto the integrated circuit via the input port and packet information is supplied to the characterizer of the first CCTC. Information flow passes through the classifier of the first CCTC, through the Table Lookup and Multiplexer Circuit (TLMC) of the first CCTC, through the characterizer of the second CCTC, through the classifier of the second CCTC, and out of the TLMC of the second CCTC in the form of a Flow Id. The Flow Id is supplied to the exact-match flow table structure to determine whether an exact match for the Flow Id is found in the flow table structure.
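
    The serial composition of two structurally identical stages can be sketched abstractly: each CCTC characterizes its input, classifies it, and performs a table lookup/multiplex, and the second stage consumes the first stage's output. The stage internals below are placeholders; only the characterize → classify → lookup pipeline and the serial chaining mirror the abstract.

```python
# Sketch: two structurally identical CCTC stages coupled in series.

class CCTC:
    def __init__(self, table):
        self.table = table                  # lookup table drives the mux

    def process(self, data):
        charac = sum(data) % 16             # characterize: summarize the bits
        klass = "even" if charac % 2 == 0 else "odd"   # classify
        return self.table[klass](data)      # table lookup + multiplexer

def flow_id(packet, cctc1, cctc2):
    # Information flows serially through both identical circuits;
    # the second stage's output is the Flow Id.
    return cctc2.process(cctc1.process(packet))
```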

    Global event chain in an island-based network flow processor

    Publication number: US09626306B2

    Publication date: 2017-04-18

    Application number: US13399983

    Filing date: 2012-02-17

    CPC classification number: G06F13/00 G06F13/4022

    Abstract: An island-based network flow processor (IB-NFP) integrated circuit includes islands organized in rows. A configurable mesh event bus extends through the islands and is configured to form one or more local event rings and a global event chain. The configurable mesh event bus is configured with configuration information received via a configurable mesh control bus. Each local event ring involves event ring circuits and event ring segments. In one example, an event packet being communicated along a local event ring reaches an event ring circuit. The event ring circuit examines the event packet and determines whether it meets a programmable criterion. If the event packet meets the criterion, then the event packet is inserted into the global event chain. The global event chain communicates the event packet to a global event manager that logs events and maintains statistics and other information.
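
    The filtering step at the heart of this abstract can be sketched simply: an event ring circuit examines each passing event packet against a programmable criterion and, on a match, inserts a copy into the global event chain toward the global event manager. The criterion and event format below are illustrative assumptions.

```python
# Sketch of an event ring circuit escalating matching events.

class EventRingCircuit:
    def __init__(self, criterion):
        self.criterion = criterion    # programmable per-circuit filter
        self.global_chain = []        # stands in for the global event chain

    def on_event(self, event):
        if self.criterion(event):
            self.global_chain.append(event)  # insert into global chain
        return event                  # packet continues around the local ring
```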

    PPI allocation request and response for accessing a memory system

    Publication number: US09559988B2

    Publication date: 2017-01-31

    Application number: US14464692

    Filing date: 2014-08-20

    CPC classification number: H04L49/3072 H04L45/742 H04L49/9042

    Abstract: Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than the PDRSDs managing and handling the storing of packet portions into the memory, a packet engine is provided. The PDRSDs use a PPI (Packet Portion Identifier) Addressing Mode (PAM) in communicating with the packet engine and in instructing the packet engine to store packet portions. A PDRSD requests a PPI from the packet engine in a PPI allocation request, and is allocated a PPI by the packet engine in a PPI allocation response, and then tags the packet portion to be written with the PPI and sends the packet portion and the PPI to the packet engine.
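
    The allocation handshake can be sketched as a two-step exchange: a PDRSD requests a PPI, the packet engine answers with one, and the PDRSD then tags the packet portion with that PPI when writing it. The free-list scheme and method names below are assumptions, not the patent's implementation.

```python
# Sketch of the PPI allocation request/response and PPI-addressed write.

class PacketEngine:
    def __init__(self, num_ppis):
        self.free = list(range(num_ppis))   # pool of unused PPIs
        self.memory = {}                    # PPI -> stored packet portion

    def alloc_request(self):
        # PPI allocation response: hand out an unused identifier.
        return self.free.pop(0)

    def write(self, ppi, portion):
        # The PDRSD addresses the write by PPI, not by memory address (PAM),
        # so it never manages memory layout itself.
        self.memory[ppi] = portion
```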


    PPI de-allocate CPP bus command

    Publication number: US09548947B2

    Publication date: 2017-01-17

    Application number: US14464700

    Filing date: 2014-08-20

    CPC classification number: H04L49/3018 H04L47/624 H04L49/252

    Abstract: Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than the PDRSDs managing the storing of packet portions into the memory, a packet engine is provided. The PDRSDs use a PPI addressing mode in communicating with the packet engine and in instructing the packet engine to store packet portions. A PDRSD requests a PPI from the packet engine, and is allocated a PPI by the packet engine, and then tags the packet portion to be written with the PPI and sends the packet portion and the PPI to the packet engine. Once the packet portion has been processed, a PPI de-allocation command causes the packet engine to de-allocate the PPI so that the PPI is available for allocating in association with another packet portion.
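
    The de-allocation step completes the PPI lifecycle: once a portion has been processed, the de-allocate command returns the PPI to the pool so it can be reused for another packet portion. A minimal sketch under the same kind of assumptions as before; the class and method names are invented for illustration.

```python
# Sketch of PPI allocate/de-allocate lifecycle in the packet engine.

class PpiEngine:
    def __init__(self, num_ppis):
        self.free = list(range(num_ppis))   # pool of unused PPIs
        self.memory = {}                    # PPI -> stored packet portion

    def allocate(self):
        return self.free.pop(0)

    def deallocate(self, ppi):
        # PPI de-allocate command: release the stored portion and make the
        # identifier available for a later allocation.
        self.memory.pop(ppi, None)
        self.free.append(ppi)
```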

