In-Flight Packet Processing
    51. Invention Application — In Force

    Publication No.: US20160124772A1

    Publication Date: 2016-05-05

    Application No.: US14530599

    Filing Date: 2014-10-31

    Abstract: A method for supporting in-flight packet processing is provided. Packet processing devices (microengines) can send a request for packet processing to a packet engine before a packet arrives. The request offers a twofold benefit. First, the microengines add themselves to a work queue to request processing. Once a packet becomes available, its header portion is automatically provided to the corresponding microengine for packet processing, so only one bus transaction is involved for a microengine to start packet processing. Second, the microengines can process a packet before the entire packet is written into the memory. This is especially useful for large packets, because a packet does not have to be completely written into the memory before the microengines process it.
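    The pre-registration scheme described in the abstract can be sketched as a simple work queue. This is a minimal model, not the patent's interface: the class and method names are illustrative assumptions, and the single dispatch step stands in for the single bus transaction.

```python
from collections import deque

class PacketEngine:
    """Sketch: microengines enqueue themselves before any packet arrives;
    an arriving header is dispatched to the next waiting microengine."""

    def __init__(self):
        self.work_queue = deque()   # microengines waiting for work
        self.dispatched = []        # (microengine, header) pairs handed out

    def request_work(self, microengine_id):
        """A microengine registers for processing ahead of packet arrival."""
        self.work_queue.append(microengine_id)

    def packet_arrived(self, header):
        """Header portion of an incoming packet; the body may still be
        in flight. Returns the microengine the header was dispatched to."""
        if self.work_queue:
            me = self.work_queue.popleft()
            self.dispatched.append((me, header))
            return me
        return None  # no microengine waiting

engine = PacketEngine()
engine.request_work("ME0")
engine.request_work("ME1")
assert engine.packet_arrived({"dst": "10.0.0.1"}) == "ME0"
assert engine.packet_arrived({"dst": "10.0.0.2"}) == "ME1"
```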


    Efficient complex network traffic management in a non-uniform memory system

    Publication No.: US09304706B2

    Publication Date: 2016-04-05

    Application No.: US14631784

    Filing Date: 2015-02-25

    Abstract: A network appliance includes a first processor, a second processor, a first storage device, and a second storage device. First status information is stored in the first storage device. The first processor is coupled to the first storage device. A queue of data is stored in the second storage device. The first status information indicates whether traffic data stored in the queue of data is permitted to be transmitted. The second processor is coupled to the second storage device. The first processor communicates with the second processor. The traffic data includes packet information. The first storage device is a high-speed memory accessible only to the first processor. The second storage device is a high-capacity memory accessible to multiple processors. The first status information is a permitted bit that indicates whether the traffic data within the queue of data is permitted to be transmitted.
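    A minimal sketch of the permitted-bit arrangement, with illustrative names: the two attributes below stand in for the two physical memories (a fast per-processor status store and a shared high-capacity queue store).

```python
class TrafficQueue:
    """Sketch: a queue of traffic data gated by a 'permitted' status bit."""

    def __init__(self):
        self.permitted = False   # status bit (first, fast storage device)
        self.queue = []          # traffic data (second, shared storage device)

    def enqueue(self, packet_info):
        self.queue.append(packet_info)

    def transmit(self):
        """Return the next packet only if transmission is permitted."""
        if self.permitted and self.queue:
            return self.queue.pop(0)
        return None

q = TrafficQueue()
q.enqueue("pkt0")
assert q.transmit() is None      # not permitted yet
q.permitted = True
assert q.transmit() == "pkt0"    # permitted bit set: data flows
```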

    Transactional memory that supports a put with low priority ring command

    Publication No.: US09280297B1

    Publication Date: 2016-03-08

    Application No.: US14631804

    Filing Date: 2015-02-25

    Inventor: Gavin J. Stark

    Abstract: A transactional memory (TM) includes a control circuit pipeline and an associated memory unit. The memory unit stores a plurality of rings. The pipeline maintains, for each ring, a head pointer and a tail pointer. A ring operation stage of the pipeline maintains the pointers as values are put onto and are taken off the rings. A put command causes the TM to put a value into a ring, provided the ring is not full. A get command causes the TM to take a value off a ring, provided the ring is not empty. A put with low priority command causes the TM to put a value into a ring, provided the ring has at least a predetermined amount of free buffer space. A get from a set of rings command causes the TM to get a value from the highest priority non-empty ring (of a specified set of rings).
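    The four ring commands can be sketched against a plain list-backed ring. This is a behavioral model only; the class name, the list representation (head at index 0, tail at the end), and the free-space threshold parameter are illustrative assumptions.

```python
class RingTM:
    """Sketch of the described ring commands on a fixed-capacity ring."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []   # head at index 0, tail at the end

    def free_space(self):
        return self.capacity - len(self.items)

    def put(self, value):
        """Put: append at the tail, provided the ring is not full."""
        if self.free_space() > 0:
            self.items.append(value)
            return True
        return False

    def put_low_priority(self, value, reserve):
        """Put-with-low-priority: succeeds only if at least `reserve`
        slots of free buffer space remain before the put."""
        if self.free_space() >= reserve:
            self.items.append(value)
            return True
        return False

    def get(self):
        """Get: take from the head, provided the ring is not empty."""
        return self.items.pop(0) if self.items else None

def get_from_set(rings):
    """Get-from-a-set-of-rings: take from the highest-priority
    (here: lowest-index) non-empty ring."""
    for ring in rings:
        if ring.items:
            return ring.get()
    return None

r = RingTM(capacity=4)
assert r.put("a") and r.put("b") and r.put("c")
assert r.put_low_priority("d", reserve=2) is False  # only 1 slot free, needs 2
assert r.get() == "a"
```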

    PACKET ENGINE THAT USES PPI ADDRESSING
    54. Invention Application — In Force

    Publication No.: US20160057069A1

    Publication Date: 2016-02-25

    Application No.: US14464690

    Filing Date: 2014-08-20

    Abstract: Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than having the PDRSDs manage and handle the storing of packet portions into the memory, a packet engine is provided. The PDRSDs use a PPI (Packet Portion Identifier) Addressing Mode (PAM) in communicating with the packet engine and in instructing the packet engine to store packet portions. The packet engine uses linear memory addressing to write the packet portions into the memory and to read the packet portions from the memory.
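    A minimal sketch of the division of labor the abstract describes: devices name packet portions only by PPI, while the engine alone performs linear addressing into the memory. The slot-based allocation policy, slot size, and method names are illustrative assumptions.

```python
class PPIPacketEngine:
    """Sketch: a packet engine translating PPIs to linear addresses."""

    def __init__(self, memory_size, slot_size):
        self.memory = bytearray(memory_size)
        self.slot_size = slot_size
        self.ppi_table = {}   # PPI -> linear base address (engine-private)
        self.next_ppi = 0

    def allocate_ppi(self):
        """Hand out a PPI; the caller never sees the linear address."""
        ppi = self.next_ppi
        self.ppi_table[ppi] = ppi * self.slot_size  # simple slot layout
        self.next_ppi += 1
        return ppi

    def write(self, ppi, offset, data):
        """PAM write: device supplies (PPI, offset); the engine alone
        computes the linear address."""
        base = self.ppi_table[ppi]
        self.memory[base + offset : base + offset + len(data)] = data

    def read(self, ppi, offset, length):
        base = self.ppi_table[ppi]
        return bytes(self.memory[base + offset : base + offset + length])

engine = PPIPacketEngine(memory_size=256, slot_size=64)
ppi = engine.allocate_ppi()
engine.write(ppi, 0, b"header")
assert engine.read(ppi, 0, 6) == b"header"
```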


    CPP BUS TRANSACTION VALUE HAVING A PAM/LAM SELECTION CODE FIELD
    55. Invention Application — In Force

    Publication No.: US20160057058A1

    Publication Date: 2016-02-25

    Application No.: US14464697

    Filing Date: 2014-08-20

    Abstract: Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than having the PDRSDs manage and handle the storing of packet portions into the memory, a packet engine is provided. A device interacting with the packet engine can use a PPI (Packet Portion Identifier) Addressing Mode (PAM) in communicating with the packet engine and in instructing the packet engine to store packet portions. Alternatively, the device can use a Linear Addressing Mode (LAM) to communicate with the packet engine. A PAM/LAM selection code field in a bus transaction value sent to the packet engine indicates whether PAM or LAM will be used.
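    The selection-code dispatch can be sketched in a few lines. The bit encoding chosen here (1 = PAM, 0 = LAM) and the table shape are illustrative assumptions, not taken from the patent.

```python
def decode_address(selection_code, value, ppi_table):
    """Sketch: interpret a bus-transaction address field according to
    its PAM/LAM selection code.

    selection_code -- the PAM/LAM field of the bus transaction value
                      (assumed: 1 = PAM, 0 = LAM)
    value          -- a PPI under PAM, or a linear address under LAM
    ppi_table      -- the engine's PPI -> linear-address mapping
    """
    if selection_code == 1:       # PAM: value is a PPI, look it up
        return ppi_table[value]
    return value                  # LAM: value is already a linear address

table = {7: 0x1C0}                # PPI 7 stored at linear address 0x1C0
assert decode_address(1, 7, table) == 0x1C0
assert decode_address(0, 0x2000, table) == 0x2000
```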


    Merging PCP flows as they are assigned to a single virtual channel
    56. Invention Grant — In Force

    Publication No.: US09264256B2

    Publication Date: 2016-02-16

    Application No.: US14321732

    Filing Date: 2014-07-01

    Inventor: Joseph M. Lamb

    CPC classification number: H04L12/4625 H04L45/745 H04L49/25

    Abstract: A Network Flow Processor (NFP) integrated circuit receives, via each of a first plurality of physical MAC ports, one or more PCP (Priority Code Point) flows. The NFP also maintains, for each of a second plurality of virtual channels, a linked list of buffers. There is one port enqueue engine for each physical MAC port. For each PCP flow received via the physical MAC port associated with a port enqueue engine, the port enqueue engine causes frame data of the flow to be loaded into one particular linked list of buffers. Each port enqueue engine has a lookup table circuit that is configurable to cause multiple PCP flows to be merged so that the frame data for the multiple flows is all assigned to the same one virtual channel. Due to the PCP flow merging, the number of virtual channels (the second number) can be smaller than eight times the number of physical MAC ports (the first number).
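    The configurable lookup-table merge can be sketched as a per-port array indexed by PCP value (0-7). The particular two-channel mapping below is an illustrative assumption; the point is that several PCP values resolve to one virtual channel.

```python
def build_merge_lut(merge_map):
    """Sketch: per-port LUT assigning each PCP value (0-7) to a virtual
    channel; several PCP flows may merge onto one channel."""
    return [merge_map[pcp] for pcp in range(8)]

# Illustrative configuration: merge PCP 0-3 onto virtual channel 0 and
# PCP 4-7 onto channel 1, so this port needs 2 channels rather than 8.
lut = build_merge_lut({pcp: (0 if pcp < 4 else 1) for pcp in range(8)})
assert lut[2] == 0 and lut[6] == 1
assert len(set(lut)) == 2   # 2 virtual channels < 8 PCP values
```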


    ISLAND-BASED NETWORK FLOW PROCESSOR WITH EFFICIENT SEARCH KEY PROCESSING
    57. Invention Application — In Force

    Publication No.: US20160011995A1

    Publication Date: 2016-01-14

    Application No.: US14326381

    Filing Date: 2014-07-08

    Inventor: Rick Bouley

    CPC classification number: G06F13/28 G06F12/1081 G06F13/1663 G06F13/4221

    Abstract: An Island-Based Network Flow Processor (IBNFP) includes a memory and a processor located on a first island, a Direct Memory Access (DMA) controller located on a second island, and an Interlaken Look-Aside (ILA) interface circuit and an interface circuit located on a third island. A search key data set including multiple search keys is stored in the memory. A descriptor is generated by the processor and is sent to the DMA controller, which generates a search key data request, receives the search key data set, and selects a single search key. The ILA interface circuit receives the search key and generates an ILA packet, including the search key, that is sent to an external transactional memory device that generates a result data value. The DMA controller receives the result data value via the ILA interface circuit, writes the result data value to the memory, and sends a DMA completion notification.


    INVERSE PCP FLOW REMAPPING FOR PFC PAUSE FRAME GENERATION
    58. Invention Application — In Force

    Publication No.: US20160006677A1

    Publication Date: 2016-01-07

    Application No.: US14321762

    Filing Date: 2014-07-01

    Inventor: Joseph M. Lamb

    CPC classification number: H04L49/3045 H04L49/9005 H04L49/9042 Y02P80/112

    Abstract: An overflow threshold value is stored for each of a plurality of virtual channels. A link manager maintains, for each virtual channel, a buffer count. If the buffer count for a virtual channel whose originating PCP flows were merged is detected to exceed that channel's overflow threshold value, then a PFC (Priority Flow Control) pause frame is generated in which multiple priority-class enable bits are set to indicate that multiple PCP flows should be paused. For the particular virtual channel that is overloaded, an Inverse PCP Remap LUT (IPRLUT) circuit performs inverse PCP mapping, including merging and/or reordering mapping, and outputs an indication of each of the PCP flows that is associated with the overloaded virtual channel. Associated physical MAC port circuitry uses this information to generate the PFC pause frame so that the appropriate multiple enable bits are set in the pause frame.
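    The inverse-mapping step can be sketched as a scan of the forward PCP-to-channel LUT followed by building the pause frame's per-priority enable-bit mask. The forward mapping used below is an illustrative assumption.

```python
def inverse_pcp_remap(lut, overloaded_channel):
    """Sketch of the IPRLUT step: given the forward pcp -> virtual-channel
    LUT, return every PCP value that feeds the overloaded channel."""
    return [pcp for pcp, vc in enumerate(lut) if vc == overloaded_channel]

def pfc_enable_bits(pcps):
    """Set one priority-class enable bit per PCP flow to be paused."""
    bits = 0
    for pcp in pcps:
        bits |= 1 << pcp
    return bits

# Illustrative mapping: PCP 0-3 were merged onto virtual channel 0.
# If channel 0 overflows, all four originating flows must be paused.
lut = [0, 0, 0, 0, 1, 1, 1, 1]
paused = inverse_pcp_remap(lut, overloaded_channel=0)
assert paused == [0, 1, 2, 3]
assert pfc_enable_bits(paused) == 0b00001111   # four enable bits set
```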


    REORDERING PCP FLOWS AS THEY ARE ASSIGNED TO VIRTUAL CHANNELS
    59. Invention Application — In Force

    Publication No.: US20160006580A1

    Publication Date: 2016-01-07

    Application No.: US14321744

    Filing Date: 2014-07-01

    Inventor: Joseph M. Lamb

    CPC classification number: H04L12/467 H04L45/745 H04L49/25

    Abstract: A Network Flow Processor (NFP) integrated circuit receives, via each of a plurality of physical MAC ports, PCP (Priority Code Point) flows. The NFP also maintains, for each of a plurality of virtual channels, a linked list of buffers. There is one port enqueue engine for each physical MAC port. For each PCP flow received via the physical MAC port associated with a port enqueue engine, the engine causes frame data of the flow to be loaded into one particular linked list of buffers. Each engine has a lookup table circuit that is configurable so that the relative priorities of the PCP flows are reordered as the PCP flows are assigned to virtual channels. A PCP flow with a higher PCP value can be assigned to a lower priority virtual channel, whereas a PCP flow with a lower PCP value can be assigned to a higher priority virtual channel.


    MULTI-PROCESSOR SYSTEM HAVING TRIPWIRE DATA MERGING AND COLLISION DETECTION
    60. Invention Application — In Force

    Publication No.: US20150370563A1

    Publication Date: 2015-12-24

    Application No.: US14311217

    Filing Date: 2014-06-20

    Inventor: Gavin J. Stark

    Abstract: An integrated circuit includes a pool of processors and a Tripwire Data Merging and Collision Detection Circuit (TDMCDC). Each processor has a special tripwire bus port. Execution of a novel tripwire instruction causes the processor to output a tripwire value onto its tripwire bus port. Each respective tripwire bus port is coupled to a corresponding respective one of a plurality of tripwire bus inputs of the TDMCDC. The TDMCDC receives tripwire values from the processors and communicates them onto a consolidated tripwire bus. From the consolidated bus the values are communicated out of the integrated circuit and to a debug station. If more than one processor outputs a valid tripwire value at a given time, then the TDMCDC asserts a collision bit signal that is communicated along with the tripwire value. Receiving tripwire values onto the debug station facilitates use of the debug station in monitoring and debugging processor code.
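    The merge-and-collision behavior can be sketched for a single bus cycle. The lowest-id arbitration rule below is an illustrative assumption, since the abstract does not say which value is forwarded when a collision occurs.

```python
def merge_tripwires(outputs):
    """Sketch of the TDMCDC merge step for one cycle.

    outputs maps processor id -> tripwire value (None if that processor
    drove nothing this cycle). Returns (value, collision_bit): one value
    forwarded onto the consolidated tripwire bus, with the collision bit
    asserted when more than one processor drove a valid value.
    """
    valid = {pid: v for pid, v in outputs.items() if v is not None}
    if not valid:
        return None, False
    winner = valid[min(valid)]        # assumed arbitration: lowest id wins
    return winner, len(valid) > 1     # collision if >1 valid value

assert merge_tripwires({0: None, 1: 0xAB}) == (0xAB, False)
assert merge_tripwires({0: 0x11, 1: 0x22}) == (0x11, True)   # collision
```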

