ADDRESSLESS MERGE COMMAND WITH DATA ITEM IDENTIFIER
    141.
    Invention Application
    ADDRESSLESS MERGE COMMAND WITH DATA ITEM IDENTIFIER (Pending, Published)

    Publication No.: US20160085477A1

    Publication Date: 2016-03-24

    Application No.: US14492013

    Filing Date: 2014-09-20

    Abstract: An addressless merge command includes an identifier of an item of data, and a reference value, but no address. A first part of the item is stored in a first place. A second part is stored in a second place. To move the first part so that the first and second parts are merged, the command is sent across a bus to a device. The device translates the identifier into a first address ADR1, and uses ADR1 to read the first part. Stored in or with the first part is a second address ADR2 indicating where the second part is stored. The device extracts ADR2, and uses ADR1 and ADR2 to issue bus commands. Each bus command causes a piece of the first part to be moved. When the entire first part has been moved, then the device returns the reference value to indicate that the merge command has been completed.

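The merge flow described in this abstract can be sketched in software. The memory model, the 4-byte piece size, and all names below are illustrative assumptions, not details from the patent; in particular, the identifier-to-ADR1 translation and the stored ADR2 are modeled with a simple table, whereas in the patent ADR2 is read out of memory along with the first part.

```python
# Hypothetical sketch of the addressless merge command flow.

PIECE = 4  # bytes moved per simulated bus command (assumed size)

class Memory:
    """Flat byte-addressable model of the memory behind the bus."""
    def __init__(self, size):
        self.data = bytearray(size)

    def bus_move(self, src, dst, length):
        # One bus command: move one piece of the first part.
        self.data[dst:dst + length] = self.data[src:src + length]

def merge(mem, id_table, item_id, ref_value):
    # Translate the data item identifier into ADR1; ADR2 is the address
    # stored with the first part that says where the second part lives.
    adr1, length, adr2 = id_table[item_id]
    for off in range(0, length, PIECE):
        n = min(PIECE, length - off)
        mem.bus_move(adr1 + off, adr2 + off, n)
    # Returning the reference value signals that the merge completed.
    return ref_value
```

A usage sketch: a 6-byte first part at address 0 is merged to join the second part at address 16, and the caller gets the reference value back as the completion notification.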

    PPI DE-ALLOCATE CPP BUS COMMAND
    142.
    Invention Application
    PPI DE-ALLOCATE CPP BUS COMMAND (Granted)

    Publication No.: US20160057081A1

    Publication Date: 2016-02-25

    Application No.: US14464700

    Filing Date: 2014-08-20

    CPC classification number: H04L49/3018 H04L47/624 H04L49/252

    Abstract: Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than having the PDRSDs manage the storing of packet portions into the memory, a packet engine is provided. The PDRSDs use a PPI addressing mode in communicating with the packet engine and in instructing the packet engine to store packet portions. A PDRSD requests a PPI from the packet engine, and is allocated a PPI by the packet engine, and then tags the packet portion to be written with the PPI and sends the packet portion and the PPI to the packet engine. Once the packet portion has been processed, a PPI de-allocation command causes the packet engine to de-allocate the PPI so that the PPI is available for allocating in association with another packet portion.

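The PPI allocate/de-allocate lifecycle in this abstract can be modeled as a small pool manager. The pool size and all method names below are assumptions for illustration only:

```python
# Hypothetical model of PPI bookkeeping inside the packet engine.

class PacketEngine:
    def __init__(self, num_ppis=512):
        self.free = list(range(num_ppis))  # PPIs available for allocation
        self.store = {}                    # PPI -> packet portion held under it

    def alloc_ppi(self):
        # A PDRSD requests a PPI; the packet engine allocates one.
        return self.free.pop(0)

    def write(self, ppi, portion):
        # The PDRSD tags the portion with the PPI and sends both over.
        self.store[ppi] = portion

    def dealloc_ppi(self, ppi):
        # PPI de-allocate command: once the portion is processed, free the
        # PPI so it can be allocated for another packet portion.
        self.store.pop(ppi, None)
        self.free.append(ppi)
```

The key design point mirrored here is that the PDRSDs never compute memory addresses; they only handle opaque PPIs, and the packet engine owns the mapping to storage.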

    Reordering PCP flows as they are assigned to virtual channels
    143.
    Invention Grant
    Reordering PCP flows as they are assigned to virtual channels (Granted)

    Publication No.: US09270488B2

    Publication Date: 2016-02-23

    Application No.: US14321744

    Filing Date: 2014-07-01

    Inventor: Joseph M. Lamb

    CPC classification number: H04L12/467 H04L45/745 H04L49/25

    Abstract: A Network Flow Processor (NFP) integrated circuit receives, via each of a plurality of physical MAC ports, PCP (Priority Code Point) flows. The NFP also maintains, for each of a plurality of virtual channels, a linked list of buffers. There is one port enqueue engine for each physical MAC port. For each PCP flow received via the physical MAC port associated with a port enqueue engine, the engine causes frame data of the flow to be loaded into one particular linked list of buffers. Each engine has a lookup table circuit that is configurable so that the relative priorities of the PCP flows are reordered as the PCP flows are assigned to virtual channels. A PCP flow with a higher PCP value can be assigned to a lower priority virtual channel, whereas a PCP flow with a lower PCP value can be assigned to a higher priority virtual channel.

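The configurable lookup table at the heart of this abstract can be sketched as a simple map from 3-bit PCP values to virtual channel numbers. The particular table contents below are an invented example of a reordering configuration, not values from the patent:

```python
# Hypothetical per-port lookup table: PCP value -> virtual channel.
# Note the reordering: PCP 5 (a higher PCP value) lands on low-priority
# channel 1, while PCP 2 (a lower PCP value) lands on high-priority channel 6.
pcp_to_vchan = {0: 0, 1: 2, 2: 6, 3: 3, 4: 4, 5: 1, 6: 5, 7: 7}

def assign_virtual_channel(pcp):
    # The port enqueue engine consults its configurable LUT for each flow.
    return pcp_to_vchan[pcp & 0x7]
```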

    GENERATING A HASH USING S-BOX NONLINEARIZING OF A REMAINDER INPUT
    144.
    Invention Application
    GENERATING A HASH USING S-BOX NONLINEARIZING OF A REMAINDER INPUT (Granted)

    Publication No.: US20160034257A1

    Publication Date: 2016-02-04

    Application No.: US14448980

    Filing Date: 2014-07-31

    Inventor: Gavin J. Stark

    CPC classification number: H04L9/3239 G09C1/00 H04L9/0643 H04L2209/12

    Abstract: A processor includes a hash register and a hash generating circuit. The hash generating circuit includes a novel programmable nonlinearizing function circuit as well as a modulo-2 multiplier, a first modulo-2 summer, a modulo-2 divider, and a second modulo-2 summer. The nonlinearizing function circuit receives a hash value from the hash register and performs a programmable nonlinearizing function, thereby generating a modified version of the hash value. In one example, the nonlinearizing function circuit includes a plurality of separately enableable S-box circuits. The multiplier multiplies the input data by a programmable multiplier value, thereby generating a product value. The first summer sums a first portion of the product value with the modified hash value. The divider divides the resulting sum by a fixed divisor value, thereby generating a remainder value. The second summer sums the remainder value and the second portion of the input data, thereby generating a hash result.

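The datapath in this abstract is built from carry-less (modulo-2) arithmetic, where summing is XOR and multiplication/division operate on bit vectors as GF(2) polynomials. The software model below follows the abstract's wiring loosely; the S-box contents, multiplier, divisor polynomial, and 16-bit width are all illustrative assumptions:

```python
# Hypothetical software model of the hash-generating circuit.

def clmul(a, b):
    """Carry-less (modulo-2) multiplication of two integers."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    return p

def mod2_rem(value, divisor):
    """Remainder of carry-less division by a fixed divisor polynomial."""
    dlen = divisor.bit_length()
    while value.bit_length() >= dlen:
        value ^= divisor << (value.bit_length() - dlen)
    return value

# Toy 4-bit S-box (a permutation of 0..15), applied per nibble.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

def nonlinearize(h, width=16):
    """Programmable nonlinearizing step: S-box each nibble of the hash value."""
    out = 0
    for shift in range(0, width, 4):
        out |= SBOX[(h >> shift) & 0xF] << shift
    return out

def hash_step(h, data, mult=0x1B, divisor=0x11021, width=16):
    product = clmul(data, mult)                # modulo-2 multiplier
    first_portion = product >> width           # first portion of the product
    s = first_portion ^ nonlinearize(h, width) # first modulo-2 summer (XOR)
    r = mod2_rem(s, divisor)                   # modulo-2 divider -> remainder
    return r ^ (data & ((1 << width) - 1))     # second summer: remainder XOR
                                               # second portion of input data
```

Because modulo-2 sums are XORs, the "summers" are single gate layers in hardware; the S-box stage is what injects nonlinearity into an otherwise CRC-like linear structure.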

    MULTI-PROCESSOR WITH EFFICIENT SEARCH KEY PROCESSING
    145.
    Invention Application
    MULTI-PROCESSOR WITH EFFICIENT SEARCH KEY PROCESSING (Granted)

    Publication No.: US20160011994A1

    Publication Date: 2016-01-14

    Application No.: US14326367

    Filing Date: 2014-07-08

    Inventor: Rick Bouley

    CPC classification number: G06F13/1663 G06F13/28

    Abstract: A multi-processor includes a shared memory that stores a search key data set including multiple search keys, a processor, a Direct Memory Access (DMA) controller, and an Interlaken Look-Aside (ILA) interface circuit. The processor generates a descriptor that is sent to the DMA controller, causing the DMA controller to read the search key data set. The DMA controller selects a single search key from the set and generates a lookup command message that is communicated to the ILA interface circuit. The ILA interface circuit generates an ILA packet that includes the single search key and sends the ILA packet to an external transactional memory device that generates a result data value. The result data value is communicated back to the DMA controller via the ILA interface circuit. The DMA controller stores the result data value in the shared memory and notifies the processor that the DMA process has completed.

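The descriptor-driven flow in this abstract can be condensed into a few steps. The descriptor fields, the shared-memory model, and the ILA device interface below are invented for illustration; they are not formats from the patent:

```python
# Hypothetical sketch of the DMA-controller lookup flow.

def dma_lookup(shared_mem, descriptor, ila_device):
    # 1. Descriptor tells the DMA controller where the search key set lives.
    keys = shared_mem[descriptor["set_addr"]]
    # 2. DMA controller selects a single search key from the set.
    key = keys[descriptor["key_index"]]
    # 3. ILA interface packs the key into an ILA packet; the external
    #    transactional memory device returns a result data value.
    result = ila_device(key)
    # 4. Result is written back to shared memory.
    shared_mem[descriptor["result_addr"]] = result
    # 5. Processor is notified that the DMA process completed.
    return "done"
```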

    HIGH-SPEED DEQUEUING OF BUFFER IDS IN FRAME STORING SYSTEM
    146.
    Invention Application
    HIGH-SPEED DEQUEUING OF BUFFER IDS IN FRAME STORING SYSTEM (Granted)

    Publication No.: US20160006665A1

    Publication Date: 2016-01-07

    Application No.: US14321756

    Filing Date: 2014-07-01

    Inventor: Joseph M. Lamb

    CPC classification number: H04L47/622

    Abstract: Incoming frame data is stored in a plurality of dual linked lists of buffers in a pipelined memory. The dual linked lists of buffers are maintained by a link manager. The link manager maintains, for each dual linked list of buffers, a first head pointer, a second head pointer, a first tail pointer, a second tail pointer, a head pointer active bit, and a tail pointer active bit. The first head and tail pointers are used to maintain the first linked list of the dual linked list. The second head and tail pointers are used to maintain the second linked list of the dual linked list. Due to the pipelined nature of the memory, the dual linked list system can be popped to supply dequeued values at a sustained rate of more than one value per read access latency period of the pipelined memory.

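One plausible reading of the dual linked list is that the two interleaved lists are enqueued and dequeued alternately, so consecutive pops target different lists and their memory reads can overlap in the pipeline. The model below follows that reading; it is a behavioral sketch (using Python deques in place of pointer-linked buffers), not the patent's pointer-level implementation:

```python
# Hypothetical behavioral model of one dual linked list of buffer IDs.

from collections import deque

class DualLinkedList:
    def __init__(self):
        self.lists = [deque(), deque()]  # the first and second linked lists
        self.head_active = 0             # head pointer active bit
        self.tail_active = 0             # tail pointer active bit

    def enqueue(self, buf_id):
        # Alternate tails so entries interleave across the two lists.
        self.lists[self.tail_active].append(buf_id)
        self.tail_active ^= 1

    def dequeue(self):
        # Alternate heads: back-to-back pops hit different lists, letting
        # their pipelined-memory reads overlap.
        buf_id = self.lists[self.head_active].popleft()
        self.head_active ^= 1
        return buf_id
```

Because enqueue and dequeue both alternate in lockstep, overall FIFO order is preserved even though the storage is split across two lists.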

    SKIP INSTRUCTION TO SKIP A NUMBER OF INSTRUCTIONS ON A PREDICATE
    147.
    Invention Application
    SKIP INSTRUCTION TO SKIP A NUMBER OF INSTRUCTIONS ON A PREDICATE (Granted)

    Publication No.: US20150370561A1

    Publication Date: 2015-12-24

    Application No.: US14311222

    Filing Date: 2014-06-20

    Inventor: Gavin J. Stark

    CPC classification number: G06F9/30069 G06F9/30072 G06F9/30145 G06F9/3802

    Abstract: A pipelined run-to-completion processor executes a conditional skip instruction. If a predicate condition as specified by a predicate code field of the skip instruction is true, then the skip instruction causes execution of a number of instructions following the skip instruction to be “skipped”. The number of instructions to be skipped is specified by a skip count field of the skip instruction. In some examples, the skip instruction includes a “flag don't touch” bit. If this bit is set, then neither the skip instruction nor any of the skipped instructions can change the values of the flags. Both the skip instruction and following instructions to be skipped are decoded one by one in sequence and pass through the processor pipeline, but the execution stage is prevented from carrying out the instruction operation of a following instruction if the predicate condition of the skip instruction was true.

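The skip semantics can be sketched as a tiny interpreter: when the predicate holds, the next `count` instructions still flow through (are decoded) but their execute stage is suppressed. The instruction encoding below is invented for illustration, and the "flag don't touch" behavior is only noted in a comment:

```python
# Hypothetical interpreter sketch of the conditional skip instruction.

def run(program, flags):
    executed = []
    suppress = 0  # number of following instructions whose execution is squashed
    for instr in program:
        if suppress:
            suppress -= 1          # decoded, passes through the pipeline,
            continue               # but its operation is not carried out
        if instr[0] == "skip":
            _, predicate, count = instr
            if flags.get(predicate, False):
                suppress = count   # skip count field of the skip instruction
        else:
            executed.append(instr)
        # (with the "flag don't touch" bit set, neither the skip instruction
        #  nor the skipped instructions would be allowed to change the flags)
    return executed
```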

    TABLE FETCH PROCESSOR INSTRUCTION USING TABLE NUMBER TO BASE ADDRESS TRANSLATION
    148.
    Invention Application
    TABLE FETCH PROCESSOR INSTRUCTION USING TABLE NUMBER TO BASE ADDRESS TRANSLATION (Pending, Published)

    Publication No.: US20150317163A1

    Publication Date: 2015-11-05

    Application No.: US14267342

    Filing Date: 2014-05-01

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor includes no instruction counter and only fetches instructions either as a result of being prompted from the outside by an input data value and/or an initial fetch information value, or as a result of execution of a fetch instruction. Initially the processor is not clocking. An incoming value kick-starts the processor to start clocking and to fetch a block of instructions from a section of code in a table. The input data value and/or the initial fetch information value determines the section and table from which the block is fetched. A LUT converts a table number in the initial fetch information value into a base address where the table is found. Fetch instructions at the ends of sections of code cause program execution to jump from section to section. A finished instruction causes an output data value to be output and stops clocking of the processor.

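The table-number-to-base-address step can be sketched directly. The field widths, section size, and LUT contents below are assumptions chosen only to make the translation concrete:

```python
# Hypothetical sketch of the kick-start fetch address computation.

# LUT: table number -> base address where that table of code sections starts.
TABLE_BASE_LUT = {0: 0x0000, 1: 0x0400, 2: 0x0800}

def initial_fetch_address(fetch_info):
    table_num = (fetch_info >> 8) & 0xFF   # assumed table-number field
    section = fetch_info & 0xFF            # assumed section index within table
    # The LUT supplies the table base; the section index offsets into it
    # (16-byte sections are an assumption).
    return TABLE_BASE_LUT[table_num] + section * 16
```

The point of the indirection is that code tables can be relocated by reprogramming the LUT, without changing the initial fetch information values delivered from outside.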

    KICK-STARTED RUN-TO-COMPLETION PROCESSOR HAVING NO INSTRUCTION COUNTER
    149.
    Invention Application
    KICK-STARTED RUN-TO-COMPLETION PROCESSOR HAVING NO INSTRUCTION COUNTER (Pending, Published)

    Publication No.: US20150317160A1

    Publication Date: 2015-11-05

    Application No.: US14267298

    Filing Date: 2014-05-01

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor includes no instruction counter and only fetches instructions either as a result of being prompted from the outside by an input data value and/or an initial fetch information value, or as a result of execution of a fetch instruction. Initially the processor is not clocking. An incoming value kick-starts the processor to start clocking and to fetch a block of instructions from a section of code in a table. The input data value and/or the initial fetch information value determines the section and table from which the block is fetched. A LUT converts a table number in the initial fetch information value into a base address where the table is found. Fetch instructions at the ends of sections of code cause program execution to jump from section to section. A finished instruction causes an output data value to be output and stops clocking of the processor.


    NETWORK INTERFACE DEVICE THAT MAPS HOST BUS WRITES OF CONFIGURATION INFORMATION FOR VIRTUAL NIDS INTO A SMALL TRANSACTIONAL MEMORY
    150.
    Invention Application
    NETWORK INTERFACE DEVICE THAT MAPS HOST BUS WRITES OF CONFIGURATION INFORMATION FOR VIRTUAL NIDS INTO A SMALL TRANSACTIONAL MEMORY (Granted)

    Publication No.: US20150220449A1

    Publication Date: 2015-08-06

    Application No.: US14172844

    Filing Date: 2014-02-04

    Abstract: A Network Interface Device (NID) of a web hosting server implements multiple virtual NIDs. A virtual NID is configured by configuration information in an appropriate one of a set of smaller blocks in a high-speed memory on the NID. There is a smaller block for each virtual NID. A virtual machine on the host can configure its virtual NID by writing configuration information into a larger block in PCIe address space. Circuitry on the NID detects that the PCIe write is into address space occupied by the larger blocks. If the write is into this space, then address translation circuitry converts the PCIe address into a smaller address that maps to the appropriate one of the smaller blocks associated with the virtual NID to be configured. If the PCIe write is detected not to be an access of a larger block, then the NID does not perform the address translation.

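The address-translation step in this abstract can be sketched as a range check followed by a remapping. The window base, block sizes, and the simple modulo used to fold a larger block onto its smaller block are all crude illustrative assumptions; the patent's actual mapping of configuration offsets is not specified in the abstract:

```python
# Hypothetical sketch of the per-virtual-NID PCIe address translation.

WINDOW_BASE = 0x10000000  # start of the larger-block region in PCIe space
LARGE_BLOCK = 0x10000     # size of each larger per-virtual-NID block
SMALL_BLOCK = 0x100       # size of each smaller block in the on-NID memory
NUM_VNIDS = 64

def translate(pcie_addr):
    """Return the small-memory address, or None if no translation applies."""
    off = pcie_addr - WINDOW_BASE
    if not (0 <= off < NUM_VNIDS * LARGE_BLOCK):
        return None  # not an access of a larger block: no address translation
    vnid, inner = divmod(off, LARGE_BLOCK)
    # Fold the offset within the larger block onto the virtual NID's smaller
    # block (the modulo here is an assumption for illustration).
    return vnid * SMALL_BLOCK + (inner % SMALL_BLOCK)
```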
