TRANSACTIONAL MEMORY THAT SUPPORTS A GET FROM ONE OF A SET OF RINGS COMMAND
    101.
    Invention Application (Granted)

    Publication No.: US20150089165A1

    Publication Date: 2015-03-26

    Application No.: US14037239

    Application Date: 2013-09-25

    Inventor: Gavin J. Stark

    CPC classification number: G06F9/3836 G06F9/3004 H04L45/74

    Abstract: A transactional memory (TM) includes a control circuit pipeline and an associated memory unit. The memory unit stores a plurality of rings. The pipeline maintains, for each ring, a head pointer and a tail pointer. A ring operation stage of the pipeline maintains the pointers as values are put onto and are taken off the rings. A put command causes the TM to put a value into a ring, provided the ring is not full. A get command causes the TM to take a value off a ring, provided the ring is not empty. A put with low priority command causes the TM to put a value into a ring, provided the ring has at least a predetermined amount of free buffer space. A get from a set of rings command causes the TM to get a value from the highest priority non-empty ring (of a specified set of rings).
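
    The four ring commands can be pictured with a small behavioral model. The Python sketch below is a software analogy only; the class and method names (RingSet, put_low_priority, get_from_set) and the "lower index = higher priority" convention are assumptions, not the claimed hardware pipeline.

        # Behavioral sketch of the ring commands; a software analogy, not the hardware pipeline.
        class RingSet:
            def __init__(self, num_rings, capacity):
                self.capacity = capacity
                self.rings = [[] for _ in range(num_rings)]  # index 0 = highest priority (assumption)

            def put(self, ring, value):
                """Put: append a value unless the ring is full."""
                if len(self.rings[ring]) >= self.capacity:
                    return False
                self.rings[ring].append(value)
                return True

            def put_low_priority(self, ring, value, min_free=4):
                """Put-with-low-priority: only succeeds if enough free buffer space remains."""
                if self.capacity - len(self.rings[ring]) < min_free:
                    return False
                self.rings[ring].append(value)
                return True

            def get(self, ring):
                """Get: take the oldest value off the ring unless it is empty."""
                return self.rings[ring].pop(0) if self.rings[ring] else None

            def get_from_set(self, ring_indexes):
                """Get-from-a-set-of-rings: take from the highest-priority non-empty ring."""
                for r in sorted(ring_indexes):
                    if self.rings[r]:
                        return r, self.rings[r].pop(0)
                return None

        rs = RingSet(num_rings=4, capacity=8)
        rs.put(2, "a"); rs.put(1, "b")
        print(rs.get_from_set({1, 2, 3}))   # -> (1, 'b'): ring 1 outranks ring 2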

    STORING AN ENTROPY SIGNAL FROM A SELF-TIMED LOGIC BIT STREAM GENERATOR IN AN ENTROPY STORAGE RING
    102.
    Invention Application (Granted)

    Publication No.: US20150088950A1

    Publication Date: 2015-03-26

    Application No.: US14037312

    Application Date: 2013-09-25

    Inventor: Gavin J. Stark

    CPC classification number: G06F7/584

    Abstract: A Self-Timed Logic Entropy Bit Stream Generator (STLEBSG) outputs a bit stream having non-deterministic entropy. The bit stream is supplied onto an input of a signal storage ring so that entropy of the bit stream is then stored in the ring as the bit stream circulates in the ring. Depending on the configuration of the ring, the bit stream as it circulates undergoes permutations, but the signal storage ring nonetheless stores the entropy of the injected bit stream. In one example, the STLEBSG is disabled and the bit stream is no longer supplied to the ring, but the ring continues to circulate and stores entropy of the original bit stream. With the STLEBSG disabled, a signal output from the ring is used to generate one or more random numbers.
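
    A simple software model shows how entropy injected from a bit stream generator persists in a circulating storage ring even after the generator is disabled. The ring length, the XOR feedback, and the function names below are illustrative assumptions, not the disclosed circuit; random.getrandbits stands in for the STLEBSG.

        import random

        RING_LEN = 31  # illustrative ring length (assumption)

        def inject(ring, bit_stream):
            """Circulate the ring while folding an injected bit into the feedback path."""
            for bit in bit_stream:
                ring = [ring[-1] ^ bit] + ring[:-1]   # entropy of the stream enters the ring
            return ring

        def circulate(ring, steps):
            """With the generator disabled, the ring keeps circulating and keeps its entropy."""
            for _ in range(steps):
                ring = [ring[-1]] + ring[:-1]
            return ring

        def read_random(ring, nbits):
            """Form a random number from the signal tapped off the ring output."""
            value = 0
            for _ in range(nbits):
                value = (value << 1) | ring[-1]
                ring = [ring[-1]] + ring[:-1]
            return value, ring

        ring = inject([0] * RING_LEN, (random.getrandbits(1) for _ in range(256)))
        ring = circulate(ring, 100)           # generator disabled, ring still circulates
        print(read_random(ring, 32)[0])       # one 32-bit random number from the ring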

    Script-Controlled Egress Packet Modifier
    103.
    Invention Application (Granted)

    Publication No.: US20150016458A1

    Publication Date: 2015-01-15

    Application No.: US13941494

    Application Date: 2013-07-14

    Abstract: An egress packet modifier includes a script parser and a pipeline of processing stages. Rather than performing egress modifications using a processor that fetches and decodes and executes instructions in a classic processor fashion, and rather than storing a packet in memory and reading it out and modifying it and writing it back, the packet modifier pipeline processes the packet by passing parts of the packet through the pipeline. A processor identifies particular egress modifications to be performed by placing a script code at the beginning of the packet. The script parser then uses the code to identify a specific script of opcodes, where each opcode defines a modification. As a part passes through a stage, the stage can carry out the modification specified by such an opcode. As realized using a current semiconductor fabrication process, the packet modifier can modify 200M packets/second at a sustained rate of up to 100 gigabits/second.
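
    The script-code / opcode relationship can be shown with a table-driven sketch. The opcode helpers, script numbering, and byte offsets below are invented for illustration; the actual modifier is a hardware pipeline that applies the opcodes to parts of the packet as they stream through.

        # A script code prepended to the packet selects a list of opcodes;
        # each opcode describes one egress modification (illustrative software model).
        def set_byte(offset, value):
            def op(pkt):
                pkt[offset] = value
                return pkt
            return op

        def insert_bytes(offset, data):
            def op(pkt):
                return pkt[:offset] + bytearray(data) + pkt[offset:]
            return op

        SCRIPTS = {                      # script code -> sequence of opcodes (assumed contents)
            1: [set_byte(0, 0xAA)],
            2: [insert_bytes(0, b"\x81\x00\x00\x64"), set_byte(12, 0x08)],
        }

        def egress_modify(packet):
            script_code, body = packet[0], bytearray(packet[1:])
            for opcode in SCRIPTS[script_code]:   # each stage applies one opcode
                body = opcode(body)
            return bytes(body)

        print(egress_modify(bytes([2]) + b"\x00" * 20).hex())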

    TRANSACTIONAL MEMORY THAT PERFORMS A PMM 32-BIT LOOKUP OPERATION
    104.
    Invention Application (Granted)

    Publication No.: US20140136798A1

    Publication Date: 2014-05-15

    Application No.: US13675394

    Application Date: 2012-11-13

    Inventor: Gavin J. Stark

    Abstract: A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a memory address. In response to the command, the TM pulls an input value (IV). The memory address is used to read a word containing multiple result values (RVs), multiple reference values, and multiple prefix values from memory. A selecting circuit within the TM uses a starting bit position and a mask size to select a portion of the IV. The portion of the IV is a lookup key value (LKV). Mask values are generated based on the prefix values. The LKV is masked by each mask value thereby generating multiple masked values that are compared to the reference values. Based on the comparison a lookup table generates a selector value that is used to select a result value. The selected result value is then communicated to the processor via the bus.
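
    The mask-and-compare flow of the lookup reads naturally as a few lines of code. The word layout, field widths, and first-match tie-break below are assumptions made for illustration only.

        def pmm_lookup(iv, start_bit, mask_size, prefixes, references, results):
            """Sketch: extract a lookup key from the input value, mask it with masks
            derived from the stored prefix values, and return the result of the first
            matching reference entry."""
            lkv = (iv >> start_bit) & ((1 << mask_size) - 1)         # selected portion of the IV
            for prefix, ref, res in zip(prefixes, references, results):
                mask = ((1 << prefix) - 1) << (mask_size - prefix)   # prefix value -> mask value
                if (lkv & mask) == (ref & mask):
                    return res
            return None

        # 8-bit key taken from bits [15:8] of the IV; two stored entries: /4 and /2 prefixes.
        print(pmm_lookup(iv=0xABCD, start_bit=8, mask_size=8,
                         prefixes=[4, 2], references=[0xA0, 0xC0], results=[17, 42]))   # -> 17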

    INTER-PACKET INTERVAL PREDICTION LEARNING ALGORITHM
    105.
    Invention Application (Granted)

    Publication No.: US20140133320A1

    Publication Date: 2014-05-15

    Application No.: US13675620

    Application Date: 2012-11-13

    Abstract: An appliance receives packets that are part of a flow pair, each packet sharing an application protocol. The appliance determines the application protocol of the packets by performing deep packet inspection (DPI) on the packets. Packet sizes are measured and converted into packet size states. Packet size states, packet sequence numbers, and packet flow directions are used to create an application protocol estimation table (APET). The APET is used during normal operation to estimate the application protocol of a flow pair without performing time consuming DPI. The appliance then determines inter-packet intervals between received packets. The inter-packet intervals are converted into inter-packet interval states. The inter-packet interval states and packet sequence numbers are used to create an inter-packet interval prediction table. The appliance then stores an inter-packet interval prediction table for each application protocol. The inter-packet interval prediction table is used during operation to predict the inter-packet interval between packets.
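
    The learned tables can be sketched as dictionaries keyed by packet sequence number. The interval bins, protocol labels, and function names are illustrative assumptions, not the appliance's internal representation.

        from collections import defaultdict

        def interval_state(dt):
            """Quantize an inter-packet interval (seconds) into a small state (assumed bins)."""
            for state, bound in enumerate((0.001, 0.01, 0.1, 1.0)):
                if dt < bound:
                    return state
            return 4

        def learn_table(flows):
            """Build a per-protocol table: packet sequence number -> typical interval state."""
            observed = defaultdict(lambda: defaultdict(list))
            for protocol, timestamps in flows:
                for seq in range(1, len(timestamps)):
                    observed[protocol][seq].append(interval_state(timestamps[seq] - timestamps[seq - 1]))
            # keep the most common state seen for each sequence number
            return {proto: {seq: max(set(states), key=states.count) for seq, states in seqs.items()}
                    for proto, seqs in observed.items()}

        def predict(table, protocol, seq):
            return table.get(protocol, {}).get(seq)

        table = learn_table([("HTTP", [0.0, 0.002, 0.020, 0.500]),
                             ("HTTP", [0.0, 0.003, 0.030, 0.600])])
        print(predict(table, "HTTP", 2))   # predicted state of the 2nd inter-packet gap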

    Transactional memory that performs a statistics add-and-update operation
    106.
    Invention Grant

    Publication No.: US10659030B1

    Publication Date: 2020-05-19

    Application No.: US15874860

    Application Date: 2018-01-18

    Abstract: A transactional memory (TM) of an island-based network flow processor (IB-NFP) integrated circuit receives a Stats Add-and-Update (AU) command across a command mesh of a Command/Push/Pull (CPP) data bus from a processor. A memory unit of the TM stores a plurality of first values in a corresponding set of memory locations. A hardware engine of the TM receives the AU, performs a pull across other meshes of the CPP bus thereby obtaining a set of addresses, uses the pulled addresses to read the first values out of the memory unit, adds the same second value to each of the first values thereby generating a corresponding set of updated first values, and causes the set of updated first values to be written back into the plurality of memory locations. Even though multiple count values are updated, there is only one bus transaction value sent across the CPP bus command mesh.
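
    The property that one command updates many counters in a single bus transaction can be modeled as one call that walks a pulled list of addresses and applies the same increment to each stored value. The dictionary memory model and function name are assumptions.

        def stats_add_and_update(memory, addresses, increment):
            """Sketch of the AU command: read the first value at each pulled address,
            add the same second value to each, and write the updates back.  The single
            call stands in for the single CPP command-mesh bus transaction."""
            for addr in addresses:
                memory[addr] = memory.get(addr, 0) + increment
            return memory

        counters = {0x100: 7, 0x108: 19, 0x110: 0}
        stats_add_and_update(counters, [0x100, 0x108, 0x110], increment=1)
        print(counters)   # every counter advanced by the same value in one "transaction"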

    Pop stack absolute instruction
    107.
    Invention Grant

    Publication No.: US10474465B2

    Publication Date: 2019-11-12

    Application No.: US14267362

    Application Date: 2014-05-01

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor executes a pop stack absolute instruction. The instruction includes an opcode, an absolute pointer value, a flag don't touch bit, and predicate bits. If a condition indicated by the predicate bits is not true, then the opcode operation is not performed. If the condition is true, then the stack of the processor is popped thereby generating an operand A. The absolute pointer value is used to identify a particular register of the stack, and the content of that particular register is an operand B. The arithmetic logic operation specified by the opcode is performed using operand A and operand B thereby generating a result, and the content of the particular register is replaced with the result. If the flag don't touch bit is set to a particular value, then the flag bits (carry flag and zero flag) are not affected by the instruction execution.
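
    The instruction semantics read naturally as one interpreter step. The ALU operation table, 32-bit width, and flag handling below are a behavioral sketch under stated assumptions, not the processor's microarchitecture.

        def pop_stack_absolute(stack, opcode, abs_ptr, flag_dont_touch, predicate_true, flags):
            """Sketch: pop -> operand A, stack[abs_ptr] -> operand B, result replaces
            stack[abs_ptr]; the flags are left untouched when the don't-touch bit is set."""
            if not predicate_true:                 # predicate bits say "skip this instruction"
                return stack, flags
            a = stack.pop(0)                       # popped top of stack (operand A)
            b = stack[abs_ptr]                     # register selected by the absolute pointer (operand B)
            ops = {"add": a + b, "sub": b - a, "and": a & b, "or": a | b}
            result = ops[opcode] & 0xFFFFFFFF
            stack[abs_ptr] = result                # result replaces the addressed register
            if not flag_dont_touch:
                flags = {"zero": result == 0,
                         "carry": (a + b) > 0xFFFFFFFF if opcode == "add" else flags["carry"]}
            return stack, flags

        print(pop_stack_absolute([5, 10, 20, 30], "add", abs_ptr=1, flag_dont_touch=True,
                                 predicate_true=True, flags={"zero": False, "carry": False}))
        # -> ([10, 25, 30], {'zero': False, 'carry': False})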

    Multiprocessor system having fast clocking prefetch circuits that cause processor clock signals to be gapped
    108.
    Invention Grant

    Publication No.: US10365681B1

    Publication Date: 2019-07-30

    Application No.: US15256588

    Application Date: 2016-09-04

    Inventor: Gavin J. Stark

    Abstract: A multiprocessor system includes several processors, a prefetching instruction code interface block, a prefetching data code interface block, a Shared Local Memory (SLMEM), and Clock Gapping Circuits (CGCs). Each processor has the same address map. Each fetches instructions from SLMEM via the instruction interface block. Each accesses data from/to SLMEM via the data interface block. The interface blocks and the SLMEM are clocked at a faster rate than the processors. The interface blocks have wide prefetch lines of the width of the SLMEM. The data interface block supports no-wait single-byte data writes from the processors, and also supports no-wait multi-byte data writes. An address translator prevents one processor from overwriting the stack of another. If a requested instruction or data is not available in the appropriate prefetching circuit, then the clock signal of the requesting processor is gapped until the instruction or data can be returned to the requesting processor.
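
    The clock-gapping idea, stalling only the requesting processor until its wide prefetch line has been refilled, can be illustrated with a toy fetch model. The line width, refill latency, and names are assumptions.

        class PrefetchingFetcher:
            """Toy model: a wide prefetch line is refilled from shared memory on a miss,
            and the miss 'gaps' the requesting processor's clock for a few fast-clock cycles."""
            LINE_WORDS = 4          # assumed prefetch-line width
            FILL_CYCLES = 3         # assumed refill latency in fast-clock cycles

            def __init__(self, shared_mem):
                self.mem = shared_mem
                self.base = None
                self.line = []

            def fetch(self, addr):
                gapped_cycles = 0
                base = addr - (addr % self.LINE_WORDS)
                if base != self.base:                   # miss: refill the wide line
                    gapped_cycles = self.FILL_CYCLES    # processor clock is gapped this long
                    self.base = base
                    self.line = self.mem[base:base + self.LINE_WORDS]
                return self.line[addr - self.base], gapped_cycles

        slmem = list(range(100, 132))                   # stand-in for SLMEM contents
        cpu0 = PrefetchingFetcher(slmem)
        print(cpu0.fetch(5))   # (105, 3)  miss: clock gapped while the line fills
        print(cpu0.fetch(6))   # (106, 0)  hit: no gapping needed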

    NFA completion notification
    109.
    Invention Grant

    Publication No.: US10362093B2

    Publication Date: 2019-07-23

    Application No.: US14151699

    Application Date: 2014-01-09

    Abstract: Multiple processors share access, via a bus, to a pipelined NFA engine. The NFA engine can implement an NFA of the type that is not a DFA (namely, it can be in multiple states at the same time). One of the processors communicates a configuration command, a go command, and an event generate command across the bus to the NFA engine. The event generate command includes a reference value. The configuration command causes the NFA engine to be configured. The go command causes the configured NFA engine to perform a particular NFA operation. Upon completion of the NFA operation, the event generate command causes the reference value to be returned back across the bus to the processor.
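
    The command sequence, configure, go, then an event-generate command whose reference value comes back on completion, can be sketched as a small message exchange. The command tuples and the queue standing in for the bus are assumptions.

        import queue

        class NfaEngine:
            """Toy model of the shared NFA engine: commands are consumed in order and,
            once the NFA operation has finished, the event's reference value is returned."""
            def __init__(self):
                self.config = None

            def run(self, commands, completion_bus):
                for cmd, payload in commands:
                    if cmd == "configure":
                        self.config = payload            # configuration command
                    elif cmd == "go":
                        pass                             # the configured NFA operation would run here
                    elif cmd == "event_generate":
                        completion_bus.put(payload)      # reference value goes back to the processor

        bus = queue.Queue()
        NfaEngine().run([("configure", {"states": 8}), ("go", None), ("event_generate", 0xBEEF)], bus)
        print(hex(bus.get()))   # the processor sees its reference value -> operation complete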

    Multiprocessor system having posted transaction bus interface that generates posted transaction bus commands
    110.
    Invention Grant

    Publication No.: US10191867B1

    Publication Date: 2019-01-29

    Application No.: US15256583

    Application Date: 2016-09-04

    Inventor: Gavin J. Stark

    Abstract: A multiprocessor system includes several processors, a Shared Local Memory (SLMEM), and an interface circuit for interfacing the system to an external posted transaction bus. Each processor has the same address map. Each fetches instructions from SLMEM, and accesses data from/to SLMEM. A processor can initiate a read transaction on the posted transaction bus by doing an AHB-S bus write to a particular address. The AHB-S write determines the type of transaction initiated and also specifies an address in a shared memory in the interface circuit. The interface circuit uses information from the AHB-S write to generate a command of the correct format. The interface circuit outputs the command onto the posted transaction bus, and then receives read data back from the posted transaction bus, and then puts the read data into the shared memory at the address specified by the processor in the original AHB-S bus write.
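
    The "AHB-S write that kicks off a posted read" can be sketched as a function that decodes the write, emits a posted-bus command, and lands the returned read data at the processor-chosen offset in shared memory. The address/data encodings and names below are assumptions, not the documented command format.

        def ahb_s_write(shared_mem, posted_bus, write_addr, write_data):
            """Sketch: the AHB-S write selects the transaction type and a shared-memory
            destination; the interface circuit builds a posted-bus read command from them."""
            txn_type = (write_addr >> 12) & 0xF        # assumed: type encoded in the write address
            dest_offset = write_data & 0xFF            # assumed: where the read data should land
            remote_addr = write_data >> 8              # assumed: address to read over the posted bus
            command = {"type": txn_type, "remote_addr": remote_addr, "dest": dest_offset}
            shared_mem[dest_offset] = posted_bus(command)   # command out, read data back, data stored
            return command

        def fake_posted_bus(command):                  # stand-in for the external posted transaction bus
            return 0xCAFE0000 | command["remote_addr"]

        shmem = {}
        cmd = ahb_s_write(shmem, fake_posted_bus, write_addr=0x42000, write_data=(0x55 << 8) | 0x10)
        print(cmd, hex(shmem[0x10]))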
