Generating a flow ID by passing packet data serially through two CCT circuits

    Publication Number: US09641436B1

    Publication Date: 2017-05-02

    Application Number: US14726441

    Application Date: 2015-05-29

    CPC classification number: H04L45/745 H04L45/38 H04L47/2441 H04L69/22

    Abstract: An integrated circuit includes an input port, a first Characterize/Classify/Table Lookup and Multiplexer Circuit (CCTC), a second CCTC, and an exact-match flow table structure. The first and second CCTCs are structurally identical and are coupled together serially. In one example, an incoming packet is received onto the integrated circuit via the input port and packet information is supplied to a first characterizer of the first CCTC. Information flow passes through the classifier of the first CCTC, through the Table Lookup and Multiplexer Circuit (TLMC) of the first CCTC, through the characterizer of the second CCTC, through the classifier of the second CCTC, and out of the TLMC of the second CCTC in the form of a Flow Id. The Flow Id is supplied to the exact-match flow table structure to determine whether an exact match for the Flow Id is found in the flow table structure.
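    The abstract above describes a hardware datapath; the C sketch below is only a rough software analogy of the same flow, assuming a simple hash-style stand-in for the characterize/classify/lookup steps and a fixed-size exact-match table. All names (cctc_stage, flow_entry_t, FLOW_TABLE_SIZE) are illustrative, not taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

#define FLOW_TABLE_SIZE 1024

/* One entry of the exact-match flow table, keyed by a 32-bit Flow Id.
 * The real structure is hardware; this is only an illustrative model. */
typedef struct {
    uint32_t flow_id;
    int      valid;
} flow_entry_t;

static flow_entry_t flow_table[FLOW_TABLE_SIZE];

/* Hypothetical model of one CCTC stage: characterize the packet data,
 * classify the characterization, then run the class through a table
 * lookup/multiplex step. The arithmetic is a stand-in only.            */
static uint32_t cctc_stage(const uint8_t *pkt, size_t len, uint32_t in)
{
    uint32_t ch = in;
    for (size_t i = 0; i < len; i++)          /* characterize */
        ch = ch * 31u + pkt[i];
    uint32_t cls = ch & 0xFFu;                /* classify     */
    return (cls << 8) ^ ch;                   /* TLMC output  */
}

/* Two structurally identical stages coupled in series; the second
 * stage's TLMC output is taken as the Flow Id.                         */
static uint32_t generate_flow_id(const uint8_t *pkt, size_t len)
{
    uint32_t first = cctc_stage(pkt, len, 0);
    return cctc_stage(pkt, len, first);
}

/* Exact-match test: index by hashing the Flow Id, then compare the
 * full key so only an exact match reports a hit.                       */
static int exact_match(uint32_t flow_id)
{
    flow_entry_t *e = &flow_table[flow_id % FLOW_TABLE_SIZE];
    return e->valid && e->flow_id == flow_id;
}

int main(void)
{
    uint8_t pkt[] = { 0x45, 0x00, 0x00, 0x3c, 0x1c, 0x46 };
    uint32_t id = generate_flow_id(pkt, sizeof pkt);

    flow_table[id % FLOW_TABLE_SIZE] = (flow_entry_t){ .flow_id = id, .valid = 1 };
    printf("Flow Id 0x%08x exact match: %s\n", (unsigned)id,
           exact_match(id) ? "yes" : "no");
    return 0;
}
```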

    32. PPI allocation request and response for accessing a memory system
    Invention Grant (In Force)

    Publication Number: US09559988B2

    Publication Date: 2017-01-31

    Application Number: US14464692

    Application Date: 2014-08-20

    CPC classification number: H04L49/3072 H04L45/742 H04L49/9042

    Abstract: Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than the PDRSDs managing and handling the storing of packet portions into the memory, a packet engine is provided. The PDRSDs use a PPI (Packet Portion Identifier) Addressing Mode (PAM) in communicating with the packet engine and in instructing the packet engine to store packet portions. A PDRSD requests a PPI from the packet engine in a PPI allocation request, and is allocated a PPI by the packet engine in a PPI allocation response, and then tags the packet portion to be written with the PPI and sends the packet portion and the PPI to the packet engine.
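    A minimal software model of the allocation handshake described above, assuming a simple free list of PPIs inside the packet engine; the names (packet_engine_alloc_ppi, pdrsd_send_portion) and the fixed PPI count are illustrative assumptions, not the CPP bus protocol itself.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PPI     256
#define PPI_INVALID 0xFFFFu

/* Allocation state kept by the packet engine: which PPIs are in use.   */
static int ppi_in_use[NUM_PPI];

/* PPI allocation request: a PDRSD asks the packet engine for a PPI
 * before sending the packet portion itself. The returned value stands
 * in for the PPI allocation response.                                  */
static uint16_t packet_engine_alloc_ppi(void)
{
    for (uint16_t i = 0; i < NUM_PPI; i++) {
        if (!ppi_in_use[i]) {
            ppi_in_use[i] = 1;
            return i;                     /* allocation response   */
        }
    }
    return PPI_INVALID;                   /* no PPI currently free */
}

/* After receiving the response, the PDRSD tags the packet portion with
 * the PPI and sends both to the packet engine for storage.             */
static void pdrsd_send_portion(uint16_t ppi, const uint8_t *portion, size_t len)
{
    (void)portion;  /* the engine would write this into memory */
    printf("sending %zu-byte portion tagged with PPI %u\n", len, (unsigned)ppi);
}

int main(void)
{
    uint8_t header[64] = { 0 };
    uint16_t ppi = packet_engine_alloc_ppi();     /* request + response */
    if (ppi != PPI_INVALID)
        pdrsd_send_portion(ppi, header, sizeof header);
    return 0;
}
```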

    33. PPI de-allocate CPP bus command
    Invention Grant (In Force)

    Publication Number: US09548947B2

    Publication Date: 2017-01-17

    Application Number: US14464700

    Application Date: 2014-08-20

    CPC classification number: H04L49/3018 H04L47/624 H04L49/252

    Abstract: Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than the PDRSDs managing the storing of packet portions into the memory, a packet engine is provided. The PDRSDs use a PPI addressing mode in communicating with the packet engine and in instructing the packet engine to store packet portions. A PDRSD requests a PPI from the packet engine, and is allocated a PPI by the packet engine, and then tags the packet portion to be written with the PPI and sends the packet portion and the PPI to the packet engine. Once the packet portion has been processed, a PPI de-allocation command causes the packet engine to de-allocate the PPI so that the PPI is available for allocating in association with another packet portion.
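    A companion sketch of the de-allocation side, under the same assumptions as the allocation model above: a PPI is simply marked free again once the packet portion has been processed, making it available for association with a later packet portion. The names are illustrative only.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PPI 256

/* Per-PPI allocation state held by the packet engine.                  */
static int ppi_in_use[NUM_PPI];

/* Model of the PPI de-allocate command: once a packet portion has been
 * processed, the processing device tells the packet engine to release
 * the PPI so that it can be allocated to a later packet portion.       */
static void packet_engine_deallocate_ppi(uint16_t ppi)
{
    if (ppi < NUM_PPI && ppi_in_use[ppi]) {
        ppi_in_use[ppi] = 0;
        printf("PPI %u released and available for reuse\n", (unsigned)ppi);
    }
}

int main(void)
{
    ppi_in_use[7] = 1;                 /* PPI 7 was allocated earlier   */
    /* ... the portion tagged with PPI 7 is processed ...               */
    packet_engine_deallocate_ppi(7);   /* the de-allocate command       */
    return 0;
}
```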

    34. Multi-processor system having tripwire data merging and collision detection
    Invention Grant (In Force)

    Publication Number: US09495158B2

    Publication Date: 2016-11-15

    Application Number: US14311217

    Application Date: 2014-06-20

    Inventor: Gavin J. Stark

    Abstract: An integrated circuit includes a pool of processors and a Tripwire Data Merging and Collision Detection Circuit (TDMCDC). Each processor has a special tripwire bus port. Execution of a novel tripwire instruction causes the processor to output a tripwire value onto its tripwire bus port. Each respective tripwire bus port is coupled to a corresponding respective one of a plurality of tripwire bus inputs of the TDMCDC. The TDMCDC receives tripwire values from the processors and communicates them onto a consolidated tripwire bus. From the consolidated bus the values are communicated out of the integrated circuit and to a debug station. If more than one processor outputs a valid tripwire value at a given time, then the TDMCDC asserts a collision bit signal that is communicated along with the tripwire value. Receiving tripwire values onto the debug station facilitates use of the debug station in monitoring and debugging processor code.
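    The merge-and-collision behavior can be modeled in a few lines of C. The sketch below assumes a fixed processor pool and treats one bus cycle as one function call; the names (tdmcdc_merge, tripwire_input_t) are illustrative rather than taken from the design.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PROCESSORS 4

/* One cycle's worth of tripwire bus activity from each processor in
 * the pool: a valid flag plus the tripwire value it drove.             */
typedef struct {
    int      valid;
    uint32_t value;
} tripwire_input_t;

/* Model of the TDMCDC merge step for one cycle: forward one valid
 * tripwire value onto the consolidated bus, and assert the collision
 * bit if more than one processor drove a value in the same cycle.      */
static void tdmcdc_merge(const tripwire_input_t in[NUM_PROCESSORS],
                         uint32_t *out_value, int *out_valid, int *collision)
{
    int count = 0;
    *out_valid = 0;
    for (int i = 0; i < NUM_PROCESSORS; i++) {
        if (in[i].valid && count++ == 0) {
            *out_value = in[i].value;   /* first valid value wins        */
            *out_valid = 1;
        }
    }
    *collision = (count > 1);           /* travels along with the value  */
}

int main(void)
{
    tripwire_input_t cycle[NUM_PROCESSORS] = {
        { 1, 0xDEAD0001u }, { 0, 0 }, { 1, 0xBEEF0002u }, { 0, 0 }
    };
    uint32_t v = 0;
    int valid, coll;

    tdmcdc_merge(cycle, &v, &valid, &coll);
    if (valid)
        printf("consolidated value 0x%08x (collision=%d)\n", (unsigned)v, coll);
    return 0;
}
```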

    35. In-Flight Packet Processing
    Invention Application (In Force)

    Publication Number: US20160124772A1

    Publication Date: 2016-05-05

    Application Number: US14530599

    Application Date: 2014-10-31

    Abstract: A method for supporting in-flight packet processing is provided. Packet processing devices (microengines) can send a request for packet processing to a packet engine before a packet comes in. The request offers a twofold benefit. First, the microengines add themselves to a work queue to request for processing. Once the packet becomes available, the header portion is automatically provided to the corresponding microengine for packet processing. Only one bus transaction is involved in order for the microengines to start packet processing. Second, the microengines can process packets before the entire packet is written into the memory. This is especially useful for large sized packets because the packets do not have to be written into the memory completely when processed by the microengines.
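    A rough software analogy of the work-queue mechanism described above, assuming a small FIFO of microengine requests and a callback invoked when a header becomes available; the queue depth, header size, and function names are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define QUEUE_DEPTH 8
#define HDR_BYTES   64

/* A pending request from a microengine that has asked for work before
 * any packet has arrived.                                              */
typedef struct {
    int microengine_id;
} work_request_t;

static work_request_t work_queue[QUEUE_DEPTH];
static int wq_head, wq_tail, wq_count;

/* The single bus transaction: a microengine adds itself to the work
 * queue ahead of time.                                                 */
static void microengine_request_work(int me_id)
{
    if (wq_count < QUEUE_DEPTH) {
        work_queue[wq_tail].microengine_id = me_id;
        wq_tail = (wq_tail + 1) % QUEUE_DEPTH;
        wq_count++;
    }
}

/* When a packet header becomes available, it is handed straight to the
 * oldest waiting microengine, even though the rest of a large packet
 * may still be being written into memory.                              */
static void packet_engine_header_arrived(const uint8_t *hdr, size_t len)
{
    (void)hdr;
    if (wq_count == 0)
        return;
    int me_id = work_queue[wq_head].microengine_id;
    wq_head = (wq_head + 1) % QUEUE_DEPTH;
    wq_count--;
    printf("delivering %zu-byte header to microengine %d\n", len, me_id);
}

int main(void)
{
    uint8_t header[HDR_BYTES] = { 0 };
    microengine_request_work(3);           /* request sent before the packet */
    packet_engine_header_arrived(header, sizeof header);
    return 0;
}
```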

    36. Transactional memory that supports a put with low priority ring command

    Publication Number: US09280297B1

    Publication Date: 2016-03-08

    Application Number: US14631804

    Application Date: 2015-02-25

    Inventor: Gavin J. Stark

    Abstract: A transactional memory (TM) includes a control circuit pipeline and an associated memory unit. The memory unit stores a plurality of rings. The pipeline maintains, for each ring, a head pointer and a tail pointer. A ring operation stage of the pipeline maintains the pointers as values are put onto and are taken off the rings. A put command causes the TM to put a value into a ring, provided the ring is not full. A get command causes the TM to take a value off a ring, provided the ring is not empty. A put with low priority command causes the TM to put a value into a ring, provided the ring has at least a predetermined amount of free buffer space. A get from a set of rings command causes the TM to get a value from the highest priority non-empty ring (of a specified set of rings).
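    The ring commands can be illustrated with an ordinary circular buffer. The sketch below assumes a small fixed ring size and a fixed free-space threshold for the put-with-low-priority command; the names and encodings are illustrative, not the TM's actual command set.

```c
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE     16
#define LOW_PRI_SPACE 4   /* free entries required by put-with-low-priority */

/* One ring held in the memory unit, with the head/tail state that the
 * ring operation stage of the pipeline would maintain.                 */
typedef struct {
    uint32_t buf[RING_SIZE];
    int head, tail, count;
} ring_t;

static int ring_put(ring_t *r, uint32_t v)            /* put command        */
{
    if (r->count == RING_SIZE) return -1;              /* fails: ring full   */
    r->buf[r->tail] = v;
    r->tail = (r->tail + 1) % RING_SIZE;
    r->count++;
    return 0;
}

static int ring_get(ring_t *r, uint32_t *v)            /* get command        */
{
    if (r->count == 0) return -1;                       /* fails: ring empty  */
    *v = r->buf[r->head];
    r->head = (r->head + 1) % RING_SIZE;
    r->count--;
    return 0;
}

static int ring_put_low_priority(ring_t *r, uint32_t v)
{
    /* Only succeeds if at least a predetermined amount of space is free. */
    if (RING_SIZE - r->count < LOW_PRI_SPACE) return -1;
    return ring_put(r, v);
}

static int ring_get_from_set(ring_t *set[], int n, uint32_t *v)
{
    /* Rings are ordered highest priority first; take from the first
     * non-empty ring in the specified set.                              */
    for (int i = 0; i < n; i++)
        if (ring_get(set[i], v) == 0) return i;
    return -1;
}

int main(void)
{
    ring_t hi = { { 0 }, 0, 0, 0 }, lo = { { 0 }, 0, 0, 0 };
    ring_t *set[2] = { &hi, &lo };
    uint32_t v = 0;

    ring_put(&lo, 42);
    ring_put_low_priority(&hi, 7);
    int which = ring_get_from_set(set, 2, &v);
    printf("got %u from ring %d\n", (unsigned)v, which);
    return 0;
}
```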

    37. PACKET ENGINE THAT USES PPI ADDRESSING
    Invention Application (In Force)

    Publication Number: US20160057069A1

    Publication Date: 2016-02-25

    Application Number: US14464690

    Application Date: 2014-08-20

    Abstract: Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than the PDRSDs managing and handling the storing of packet portions into the memory, a packet engine is provided. The PDRSDs use a PPI (Packet Portion Identifier) Addressing Mode (PAM) in communicating with the packet engine and in instructing the packet engine to store packet portions. The packet engine uses linear memory addressing to write the packet portions into the memory, and to read the packet portions from the memory.
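    A minimal sketch of the PPI-to-linear-address translation performed inside the packet engine, assuming one fixed-size memory slot per PPI purely for illustration; the real mapping and the names used here (ppi_to_linear, packet_engine_write) are not specified by the abstract.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_PPI    256
#define SLOT_BYTES 256   /* assumed fixed-size slot per packet portion */

/* Memory owned by the packet engine; the devices that talk to it never
 * compute these addresses themselves, they only present a PPI.         */
static uint8_t packet_memory[NUM_PPI * SLOT_BYTES];

/* The engine's internal translation from a PPI to the linear address
 * at which that packet portion is stored. A fixed-slot mapping is an
 * assumption made purely for illustration.                             */
static uint8_t *ppi_to_linear(uint16_t ppi)
{
    return &packet_memory[(size_t)ppi * SLOT_BYTES];
}

/* PAM write: the caller supplies a PPI; the engine picks the address.  */
static void packet_engine_write(uint16_t ppi, const uint8_t *data, size_t len)
{
    memcpy(ppi_to_linear(ppi), data, len < SLOT_BYTES ? len : SLOT_BYTES);
}

/* PAM read: again addressed by PPI, resolved to a linear address only
 * inside the engine.                                                   */
static void packet_engine_read(uint16_t ppi, uint8_t *out, size_t len)
{
    memcpy(out, ppi_to_linear(ppi), len < SLOT_BYTES ? len : SLOT_BYTES);
}

int main(void)
{
    uint8_t portion[4] = { 0xDE, 0xAD, 0xBE, 0xEF }, back[4] = { 0 };

    packet_engine_write(12, portion, sizeof portion);
    packet_engine_read(12, back, sizeof back);
    printf("read back %02x %02x %02x %02x via PPI 12\n",
           back[0], back[1], back[2], back[3]);
    return 0;
}
```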

    38. CPP BUS TRANSACTION VALUE HAVING A PAM/LAM SELECTION CODE FIELD
    Invention Application (In Force)

    Publication Number: US20160057058A1

    Publication Date: 2016-02-25

    Application Number: US14464697

    Application Date: 2014-08-20

    Abstract: Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than the PDRSDs managing and handling the storing of packet portions into the memory, a packet engine is provided. A device interacting with the packet engine can use a PPI (Packet Portion Identifier) Addressing Mode (PAM) in communicating with the packet engine and in instructing the packet engine to store packet portions. Alternatively, the device can use a Linear Addressing Mode (LAM) to communicate with the packet engine. A PAM/LAM selection code field in a bus transaction value sent to the packet engine indicates whether PAM or LAM will be used.
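    The selection-code idea can be pictured as a one-bit field in the transaction value; the bit position, field widths, and struct layout below are assumptions made only for illustration, not the actual CPP bus transaction format.

```c
#include <stdio.h>

/* Illustrative layout of a bus transaction value that carries either a
 * PPI (PAM) or a linear address (LAM), selected by a one-bit code
 * field. The field widths and positions are assumptions.               */
typedef struct {
    unsigned pam_lam_select : 1;    /* 1 = PAM, 0 = LAM                 */
    unsigned address        : 31;   /* PPI number or linear address     */
} bus_transaction_t;

static void packet_engine_handle(bus_transaction_t t)
{
    if (t.pam_lam_select)
        printf("PAM access: resolve PPI %u to a memory address internally\n",
               (unsigned)t.address);
    else
        printf("LAM access: use linear address 0x%08x directly\n",
               (unsigned)t.address);
}

int main(void)
{
    bus_transaction_t pam = { .pam_lam_select = 1, .address = 12 };
    bus_transaction_t lam = { .pam_lam_select = 0, .address = 0x4000 };

    packet_engine_handle(pam);
    packet_engine_handle(lam);
    return 0;
}
```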

    39. MULTI-PROCESSOR SYSTEM HAVING TRIPWIRE DATA MERGING AND COLLISION DETECTION
    Invention Application (In Force)

    Publication Number: US20150370563A1

    Publication Date: 2015-12-24

    Application Number: US14311217

    Application Date: 2014-06-20

    Inventor: Gavin J. Stark

    Abstract: An integrated circuit includes a pool of processors and a Tripwire Data Merging and Collision Detection Circuit (TDMCDC). Each processor has a special tripwire bus port. Execution of a novel tripwire instruction causes the processor to output a tripwire value onto its tripwire bus port. Each respective tripwire bus port is coupled to a corresponding respective one of a plurality of tripwire bus inputs of the TDMCDC. The TDMCDC receives tripwire values from the processors and communicates them onto a consolidated tripwire bus. From the consolidated bus the values are communicated out of the integrated circuit and to a debug station. If more than one processor outputs a valid tripwire value at a given time, then the TDMCDC asserts a collision bit signal that is communicated along with the tripwire value. Receiving tripwire values onto the debug station facilitates use of the debug station in monitoring and debugging processor code.
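    To complement the merge-side sketch given earlier for this patent family, the fragment below models the processor side: code instrumented with tripwire instructions emits a value at points of interest, which the debug station would later observe on the consolidated bus. The tripwire() helper and the marker encodings are purely illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in for a processor's tripwire bus port: in hardware the value
 * is driven onto the dedicated port for a cycle; here it is printed.   */
static void tripwire(uint32_t value)
{
    printf("tripwire port: valid=1 value=0x%08x\n", (unsigned)value);
}

/* Processor code instrumented with tripwire instructions at points of
 * interest; the debug station sees these values (plus the collision
 * bit, not modeled here) on the consolidated bus.                      */
static int process_packet(int pkt_id)
{
    tripwire(0x10000000u | (uint32_t)pkt_id);    /* entry marker        */
    int result = pkt_id * 2;                     /* ... real work ...   */
    tripwire(0x20000000u | (uint32_t)result);    /* exit marker         */
    return result;
}

int main(void)
{
    process_packet(5);
    return 0;
}
```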

    40. EFFICIENT CONDITIONAL INSTRUCTION HAVING COMPANION LOAD PREDICATE BITS INSTRUCTION
    Invention Application (In Force)

    Publication Number: US20150370562A1

    Publication Date: 2015-12-24

    Application Number: US14311225

    Application Date: 2014-06-20

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor can decode three instructions in three consecutive clock cycles, and can also execute the instructions in three consecutive clock cycles. The first instruction causes the ALU to generate a value which is then loaded due to execution of the first instruction into a register of a register file. The second instruction accesses the register and loads the value into predicate bits in a register file read stage. The predicate bits are loaded in the very next clock cycle following the clock cycle in which the second instruction was decoded. The third instruction is a conditional instruction that uses the values of the predicate bits as a predicate code to determine a predicate function. If a predicate condition (as determined by the predicate function as applied to flags) is true then an instruction operation of the third instruction is carried out, otherwise it is not carried out.
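    The three-instruction sequence can be mimicked in plain C. The sketch below assumes a toy register file, a tiny predicate-code encoding (0 = if-zero, 1 = if-carry, otherwise always), and two flags, all of which are illustrative stand-ins for the processor's actual state rather than its instruction set.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of the three-instruction sequence: (1) an ALU instruction
 * writes a value into a register, (2) a load-predicate-bits instruction
 * copies that value into the predicate bits, (3) a conditional
 * instruction interprets the predicate bits as a predicate code that
 * selects a function of the flags.                                     */
static uint32_t regfile[16];
static uint8_t  predicate_bits;
static int      flag_zero, flag_carry;

static void alu_write(int rd, uint32_t v)          /* instruction 1 */
{
    regfile[rd] = v;
}

static void load_predicate_bits(int rs)            /* instruction 2 */
{
    predicate_bits = (uint8_t)regfile[rs];
}

/* Predicate function chosen by the predicate code. The encoding here
 * (0 = if-zero, 1 = if-carry, otherwise always) is an assumption.      */
static int predicate_true(void)
{
    switch (predicate_bits) {
    case 0:  return flag_zero;
    case 1:  return flag_carry;
    default: return 1;
    }
}

static void conditional_add(int rd, int rs)        /* instruction 3 */
{
    if (predicate_true())          /* carried out only if the predicate holds */
        regfile[rd] += regfile[rs];
}

int main(void)
{
    flag_carry = 1;
    alu_write(1, 1);              /* value that becomes the predicate code */
    load_predicate_bits(1);       /* predicate bits loaded the next cycle  */

    regfile[2] = 10;
    regfile[3] = 5;
    conditional_add(2, 3);        /* executes because the carry flag is set */
    printf("r2 = %u\n", (unsigned)regfile[2]);
    return 0;
}
```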
