51. Multi-processor system having tripwire data merging and collision detection
    Invention Grant (In Force)

    Publication No.: US09495158B2

    Publication Date: 2016-11-15

    Application No.: US14311217

    Filing Date: 2014-06-20

    Inventor: Gavin J. Stark

    Abstract: An integrated circuit includes a pool of processors and a Tripwire Data Merging and Collision Detection Circuit (TDMCDC). Each processor has a special tripwire bus port. Execution of a novel tripwire instruction causes the processor to output a tripwire value onto its tripwire bus port. Each respective tripwire bus port is coupled to a corresponding respective one of a plurality of tripwire bus inputs of the TDMCDC. The TDMCDC receives tripwire values from the processors and communicates them onto a consolidated tripwire bus. From the consolidated bus the values are communicated out of the integrated circuit and to a debug station. If more than one processor outputs a valid tripwire value at a given time, then the TDMCDC asserts a collision bit signal that is communicated along with the tripwire value. Receiving tripwire values onto the debug station facilitates use of the debug station in monitoring and debugging processor code.
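
    The merging rule described above can be modeled in a few lines of software. The sketch below is a minimal C model under assumed structure names and a 64-bit tripwire value width, not the circuit itself: the first valid input is forwarded to the consolidated bus, and a collision bit is asserted if any other input is valid in the same cycle.

```c
#include <stdbool.h>
#include <stdint.h>

/* One per-processor tripwire bus input: a valid flag plus a tripwire value.
 * The 64-bit width and field names are illustrative assumptions. */
typedef struct {
    bool     valid;
    uint64_t value;
} tripwire_in_t;

typedef struct {
    bool     valid;      /* a tripwire value is present this cycle        */
    bool     collision;  /* more than one processor drove a value at once */
    uint64_t value;      /* the value selected for the consolidated bus   */
} tripwire_out_t;

/* Model of one cycle of tripwire data merging and collision detection:
 * forward the first valid input and flag a collision if any other input
 * was also valid in the same cycle. */
static tripwire_out_t tdmcdc_merge(const tripwire_in_t *in, int n_proc)
{
    tripwire_out_t out = { false, false, 0 };
    for (int i = 0; i < n_proc; i++) {
        if (!in[i].valid)
            continue;
        if (!out.valid) {
            out.valid = true;
            out.value = in[i].value;   /* first valid value wins (assumed policy) */
        } else {
            out.collision = true;      /* a second valid value collided */
        }
    }
    return out;
}
```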

52. Instantaneous random early detection packet dropping with drop precedence
    Invention Grant (In Force)

    Publication No.: US09485195B2

    Publication Date: 2016-11-01

    Application No.: US14507602

    Filing Date: 2014-10-06

    CPC classification number: H04L49/00

    Abstract: A circuit receives a queue number that indicates a queue stored within a memory unit and a packet descriptor that includes a drop precedence value, and in response determines an instantaneous queue depth of the queue. The instantaneous queue depth and drop precedence value are used to determine a drop probability. The drop probability is used to randomly determine if the packet descriptor should be stored in the queue. When a packet descriptor is not stored in a queue, the packet associated with the packet descriptor is dropped. The queue has a first queue depth range. A first drop probability is used when the queue depth is within the first queue depth range and the drop precedence is equal to a first value. A second drop probability is used when the queue depth is within the first queue depth range and the drop precedence is equal to a second value.
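
    The drop decision described above amounts to a table lookup followed by a random trial. The following C sketch assumes an illustrative table shape, depth thresholds, and probabilities; none of these values come from the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define N_DEPTH_RANGES 4   /* illustrative: queue depth split into 4 ranges */
#define N_PRECEDENCES  3   /* illustrative: 3 drop precedence values        */

/* drop_prob[range][precedence] in percent; the values are placeholders. */
static const uint8_t drop_prob[N_DEPTH_RANGES][N_PRECEDENCES] = {
    {  0,  0,   0 },   /* shallow queue: never drop      */
    {  0,  5,  10 },
    {  5, 25,  50 },
    { 50, 75, 100 },   /* nearly full: drop most packets */
};

/* Map an instantaneous queue depth to one of the depth ranges.
 * The fixed thresholds here are assumptions for the sketch. */
static int depth_range(uint32_t depth)
{
    if (depth < 64)   return 0;
    if (depth < 256)  return 1;
    if (depth < 1024) return 2;
    return 3;
}

/* Return true if the packet descriptor should be dropped rather than
 * enqueued, using the instantaneous queue depth and drop precedence. */
static bool red_drop(uint32_t queue_depth, int precedence)
{
    int prob = drop_prob[depth_range(queue_depth)][precedence];
    return (rand() % 100) < prob;   /* random decision against the probability */
}
```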

53. Efficient complex network traffic management in a non-uniform memory system

    Publication No.: US09304706B2

    Publication Date: 2016-04-05

    Application No.: US14631784

    Filing Date: 2015-02-25

    Abstract: A network appliance includes a first processor, a second processor, a first storage device, and a second storage device. First status information is stored in the first storage device. The first processor is coupled to the first storage device. A queue of data is stored in the second storage device. The first status information indicates if traffic data stored in the queue of data is permitted to be transmitted. The second processor is coupled to the second storage device. The first processor communicates with the second processor. The traffic data includes packet information. The first storage device is a high speed memory only accessible to the first processor. The second storage device is a high capacity memory accessible to multiple processors. The first status information is a permitted bit that indicates if the traffic data within the queue of data is permitted to be transmitted.
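
    A minimal C sketch of the described memory split follows; the structure layout and queue count are assumptions, and only the fast-memory permitted-bit check performed by the first processor is shown.

```c
#include <stdbool.h>
#include <stdint.h>

#define N_QUEUES 1024   /* illustrative queue count */

/* First storage device: small, fast memory private to the first processor.
 * One permitted bit per queue indicates whether its traffic may be sent. */
static bool permitted[N_QUEUES];

/* Second storage device: large shared memory holding the queued traffic.
 * The descriptor fields are assumptions for the sketch. */
typedef struct {
    uint64_t packet_addr;
    uint32_t packet_len;
} pkt_descriptor_t;

typedef struct {
    pkt_descriptor_t *entries;
    uint32_t head, tail, size;
} traffic_queue_t;

/* Run on the first processor: consult only the fast local permitted bit
 * before telling the second processor to transmit from the shared queue. */
static bool may_transmit(uint32_t queue_num)
{
    return permitted[queue_num];
}
```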

54. Transactional memory that supports a put with low priority ring command

    Publication No.: US09280297B1

    Publication Date: 2016-03-08

    Application No.: US14631804

    Filing Date: 2015-02-25

    Inventor: Gavin J. Stark

    Abstract: A transactional memory (TM) includes a control circuit pipeline and an associated memory unit. The memory unit stores a plurality of rings. The pipeline maintains, for each ring, a head pointer and a tail pointer. A ring operation stage of the pipeline maintains the pointers as values are put onto and are taken off the rings. A put command causes the TM to put a value into a ring, provided the ring is not full. A get command causes the TM to take a value off a ring, provided the ring is not empty. A put with low priority command causes the TM to put a value into a ring, provided the ring has at least a predetermined amount of free buffer space. A get from a set of rings command causes the TM to get a value from the highest priority non-empty ring (of a specified set of rings).
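
    The four ring commands described above map naturally onto a software ring-buffer model. The C sketch below is illustrative only; the structure fields, the min_free parameter, and the assumption that lower indices mean higher priority are not taken from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t *buf;
    uint32_t  size;       /* number of slots in the ring           */
    uint32_t  head, tail; /* head: next to take, tail: next to put */
    uint32_t  count;      /* entries currently on the ring         */
} ring_t;

/* Ordinary put: succeeds only if the ring is not full. */
static bool ring_put(ring_t *r, uint32_t v)
{
    if (r->count == r->size) return false;
    r->buf[r->tail] = v;
    r->tail = (r->tail + 1) % r->size;
    r->count++;
    return true;
}

/* Put with low priority: succeeds only if the ring still has at least
 * min_free free slots (the predetermined amount of free buffer space). */
static bool ring_put_low_priority(ring_t *r, uint32_t v, uint32_t min_free)
{
    if (r->size - r->count < min_free) return false;
    return ring_put(r, v);
}

/* Get: succeeds only if the ring is not empty. */
static bool ring_get(ring_t *r, uint32_t *v)
{
    if (r->count == 0) return false;
    *v = r->buf[r->head];
    r->head = (r->head + 1) % r->size;
    r->count--;
    return true;
}

/* Get from a set of rings: take from the highest priority non-empty ring,
 * where rings[0] is assumed to be the highest priority. */
static bool ring_get_from_set(ring_t *rings[], int n, uint32_t *v)
{
    for (int i = 0; i < n; i++)
        if (ring_get(rings[i], v)) return true;
    return false;
}
```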

55. Picoengine pool transactional memory architecture
    Invention Grant (In Force)

    Publication No.: US09268600B2

    Publication Date: 2016-02-23

    Application No.: US13970601

    Filing Date: 2013-08-20

    Applicant: Gavin J. Stark

    Inventor: Gavin J. Stark

    CPC classification number: G06F9/467 G06F15/163 H04L45/745 H04L45/7457

    Abstract: A transactional memory (TM) includes a selectable bank of hardware algorithm prework engines, a selectable bank of hardware lookup engines, and a memory unit. The memory unit stores result values (RVs), instructions, and lookup data operands. The transactional memory receives a lookup command across a bus from one of a plurality of processors. The lookup command includes a source identification value, data, a table number value, and a table set value. In response to the lookup command, the transactional memory selects one hardware algorithm prework engine and one hardware lookup engine to perform the lookup operation. The selected hardware algorithm prework engine modifies data included in the lookup command. The selected hardware lookup engine performs a lookup operation using the modified data and lookup operands provided by the memory unit. In response to performing the lookup operation, the transactional memory returns a result value and optionally an instruction.
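
    The command flow described above can be outlined in C as follows. The engine signatures, field names, and the choice to select both engines by the table number value are assumptions made for the sketch; the optionally returned instruction is omitted.

```c
#include <stdint.h>

/* Engine function types: a prework engine transforms the lookup data,
 * a lookup engine searches using the data and operands from memory.
 * These signatures are assumptions for the sketch. */
typedef uint64_t (*prework_fn)(uint64_t data);
typedef uint32_t (*lookup_fn)(uint64_t data, const uint32_t *operands);

typedef struct {
    uint32_t src_id;     /* which processor issued the command */
    uint64_t data;       /* key data to look up                */
    uint32_t table_num;  /* selects prework/lookup engines     */
    uint32_t table_set;  /* selects the operand set in memory  */
} lookup_cmd_t;

typedef struct {
    uint32_t result_value;
    uint32_t instruction;   /* optional returned instruction */
} lookup_result_t;

/* One lookup through the transactional memory: choose engines by table
 * number, run prework to modify the data, then run the lookup with
 * operands supplied by the memory unit. */
static lookup_result_t tm_lookup(const lookup_cmd_t *cmd,
                                 prework_fn prework_bank[],
                                 lookup_fn lookup_bank[],
                                 const uint32_t *operand_mem[])
{
    uint64_t modified = prework_bank[cmd->table_num](cmd->data);
    uint32_t rv = lookup_bank[cmd->table_num](modified,
                                              operand_mem[cmd->table_set]);
    lookup_result_t res = { rv, 0 /* instruction fetch omitted in sketch */ };
    return res;
}
```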

56. Merging PCP flows as they are assigned to a single virtual channel
    Invention Grant (In Force)

    Publication No.: US09264256B2

    Publication Date: 2016-02-16

    Application No.: US14321732

    Filing Date: 2014-07-01

    Inventor: Joseph M. Lamb

    CPC classification number: H04L12/4625 H04L45/745 H04L49/25

    Abstract: A Network Flow Processor (NFP) integrated circuit receives, via each of a first plurality of physical MAC ports, one or more PCP (Priority Code Point) flows. The NFP also maintains, for each of a second plurality of virtual channels, a linked list of buffers. There is one port enqueue engine for each physical MAC port. For each PCP flow received via the physical MAC port associated with a port enqueue engine, the port enqueue engine causes frame data of the flow to be loaded into one particular linked list of buffers. Each port enqueue engine has a lookup table circuit that is configurable to cause multiple PCP flows to be merged so that the frame data for the multiple flows is all assigned to the same one virtual channel. Due to the PCP flow merging, the second number can be smaller than the first number multiplied by eight.
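
    The per-port lookup table described above can be sketched as a small two-dimensional array in C. The port count, channel numbers, and example configuration below are illustrative assumptions.

```c
#include <stdint.h>

#define N_MAC_PORTS 24   /* first plurality: physical MAC ports (assumed) */
#define N_PCP        8   /* PCP is a 3-bit field, so 8 possible values    */

/* Per-port configurable lookup table: PCP value -> virtual channel.
 * Assigning the same channel to several PCP values merges those flows. */
static uint8_t pcp_to_vchan[N_MAC_PORTS][N_PCP];

/* Port enqueue engine decision: which virtual channel's linked list of
 * buffers receives the frame data for this (port, PCP) flow. */
static uint8_t select_virtual_channel(int port, uint8_t pcp)
{
    return pcp_to_vchan[port][pcp & 0x7];
}

/* Example configuration: on port 0, merge PCP 0..3 onto virtual channel 5,
 * so the total channel count can be smaller than ports * 8. */
static void merge_low_pcp_flows(void)
{
    for (uint8_t pcp = 0; pcp < 4; pcp++)
        pcp_to_vchan[0][pcp] = 5;
}
```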

57. POP STACK ABSOLUTE INSTRUCTION
    Invention Application (Published, Under Examination)

    Publication No.: US20150317159A1

    Publication Date: 2015-11-05

    Application No.: US14267362

    Filing Date: 2014-05-01

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor executes a pop stack absolute instruction. The instruction includes an opcode, an absolute pointer value, a flag don't touch bit, and predicate bits. If a condition indicated by the predicate bits is not true, then the opcode operation is not performed. If the condition is true, then the stack of the processor is popped thereby generating an operand A. The absolute pointer value is used to identify a particular register of the stack, and the content of that particular register is an operand B. The arithmetic logic operation specified by the opcode is performed using operand A and operand B thereby generating a result, and the content of the particular register is replaced with the result. If the flag don't touch bit is set to a particular value, then the flag bits (carry flag and zero flag) are not affected by execution of the instruction.
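
    A software model of the described instruction semantics is sketched below in C. The register-stack layout is assumed, and the ALU operation is fixed to an add purely to keep the sketch short; the patent covers whatever operation the opcode names.

```c
#include <stdbool.h>
#include <stdint.h>

#define STACK_REGS 16   /* illustrative number of stack registers */

typedef struct {
    uint32_t regs[STACK_REGS];  /* register stack                       */
    int      top;               /* index of the top-of-stack register   */
    bool     carry_flag;
    bool     zero_flag;
    bool     predicate_true;    /* condition selected by predicate bits */
} cpu_state_t;

/* Model of the pop stack absolute instruction; the ALU operation is an
 * add here only to keep the sketch concrete. */
static void pop_stack_absolute(cpu_state_t *s, int abs_ptr, bool flag_dont_touch)
{
    if (!s->predicate_true)
        return;                         /* predicate false: no operation     */

    uint32_t a = s->regs[s->top--];     /* pop the stack -> operand A        */
    uint32_t b = s->regs[abs_ptr];      /* absolute pointer register -> B    */
    uint64_t wide = (uint64_t)a + b;    /* ALU operation named by the opcode */
    uint32_t result = (uint32_t)wide;

    s->regs[abs_ptr] = result;          /* result replaces that register     */

    if (!flag_dont_touch) {             /* flags only update when allowed    */
        s->carry_flag = (wide >> 32) != 0;
        s->zero_flag  = (result == 0);
    }
}
```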

58. GUARANTEED IN-ORDER PACKET DELIVERY
    Invention Application (In Force)

    Publication No.: US20150237180A1

    Publication Date: 2015-08-20

    Application No.: US14184455

    Filing Date: 2014-02-19

    Abstract: Circuitry to provide in-order packet delivery. A packet descriptor including a sequence number is received. It is determined in which of three ranges the sequence number resides. Depending, at least in part, on the range in which the sequence number resides, it is determined whether the packet descriptor is to be communicated to a scheduler, which causes an associated packet to be transmitted. If the sequence number resides in a first “flush” range, all associated packet descriptors are output. If the sequence number resides in a second “send” range, only the received packet descriptor is output. If the sequence number resides in a third “store and reorder” range and the sequence number is the next in-order sequence number, the packet descriptor is output; if the sequence number is not the next in-order sequence number, the packet descriptor is stored in a buffer and a corresponding valid bit is set.
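
    The three-range decision described above is sketched below in C. Because the abstract does not define the range boundaries, the range classification is taken as an input; the buffer size and field names are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define REORDER_SLOTS 64   /* illustrative reorder buffer size */

typedef struct {
    uint32_t next_seq;              /* next expected in-order sequence number */
    bool     valid[REORDER_SLOTS];  /* per-slot valid bits                    */
    uint32_t stored[REORDER_SLOTS]; /* buffered packet descriptors            */
} reorder_state_t;

/* Stand-in for handing a descriptor to the scheduler, which causes the
 * associated packet to be transmitted. */
static void send_descriptor(uint32_t desc) { printf("send %u\n", (unsigned)desc); }

/* The caller supplies the range in which the sequence number was found
 * to reside, since the range boundaries are not given in the abstract. */
enum seq_range { RANGE_FLUSH, RANGE_SEND, RANGE_STORE_REORDER };

static void handle_descriptor(reorder_state_t *st, enum seq_range range,
                              uint32_t seq, uint32_t desc)
{
    switch (range) {
    case RANGE_FLUSH:               /* output all buffered descriptors, then this one */
        for (uint32_t i = 0; i < REORDER_SLOTS; i++)
            if (st->valid[i]) { send_descriptor(st->stored[i]); st->valid[i] = false; }
        send_descriptor(desc);
        break;
    case RANGE_SEND:                /* output only the received descriptor */
        send_descriptor(desc);
        break;
    case RANGE_STORE_REORDER:
        if (seq == st->next_seq) {  /* next in order: output immediately */
            send_descriptor(desc);
            st->next_seq++;
        } else {                    /* otherwise store it and set the valid bit */
            st->stored[seq % REORDER_SLOTS] = desc;
            st->valid[seq % REORDER_SLOTS] = true;
        }
        break;
    }
}
```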

59. NETWORK INTERFACE DEVICE THAT ALERTS A MONITORING PROCESSOR IF CONFIGURATION OF A VIRTUAL NID IS CHANGED
    Invention Application (Published, Under Examination)

    Publication No.: US20150222513A1

    Publication Date: 2015-08-06

    Application No.: US14172851

    Filing Date: 2014-02-04

    CPC classification number: H04L41/0866 H04L41/0806 H04L49/65 H04L49/70

    Abstract: A Network Interface Device (NID) of a web hosting server implements multiple virtual NIDs. For each virtual NID there is a block in a memory of a transactional memory on the NID. This block stores configuration information that configures the corresponding virtual NID. The NID also has a single managing processor that monitors configuration of the plurality of virtual NIDs. If there is a write into the memory space where the configuration information for the virtual NIDs is stored, then the transactional memory detects this write and in response sends an alert to the managing processor. The size and location of the memory space in the memory for which write alerts are to be generated is programmable. The content and destination of the alert is also programmable.
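
    The write-detection behavior described above reduces to a range check on every write into the transactional memory. The C sketch below uses assumed names for the programmable window and the alert target.

```c
#include <stdint.h>
#include <stdio.h>

/* Programmable description of the monitored memory window that holds the
 * per-virtual-NID configuration blocks, plus a programmable alert target. */
typedef struct {
    uint32_t watch_base;   /* start of the monitored memory space */
    uint32_t watch_size;   /* size of the monitored memory space  */
    uint32_t alert_dest;   /* identifies the managing processor   */
} write_alert_cfg_t;

/* Stand-in for delivering an alert message to the managing processor. */
static void send_alert(uint32_t dest, uint32_t addr)
{
    printf("alert to %u: config write at 0x%08x\n", (unsigned)dest, (unsigned)addr);
}

/* Called by the transactional memory model on every write: if the write
 * falls inside the monitored window, alert the managing processor. */
static void on_memory_write(const write_alert_cfg_t *cfg,
                            uint32_t addr, uint32_t value)
{
    (void)value;  /* the alert here carries only the address */
    if (addr >= cfg->watch_base && addr < cfg->watch_base + cfg->watch_size)
        send_alert(cfg->alert_dest, addr);
}
```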

60. Entropy storage ring having stages with feedback inputs
    Invention Grant (In Force)

    Publication No.: US09092284B2

    Publication Date: 2015-07-28

    Application No.: US14037319

    Filing Date: 2013-09-25

    Inventor: Gavin J. Stark

    CPC classification number: G06F7/58

    Abstract: An entropy storage ring includes an input node, a plurality of serial-connected stages, and an output node. Each stage includes an XOR (or XNOR) circuit, a delay element having an input coupled to the XOR output, and a combinatorial circuit having an output coupled to a second input of the XOR. The combinatorial circuit may be a NAND, NOR, AND or OR gate. A first input of the XOR is the data input of the stage. The output of the delay element is the data output of the stage. A first input of the combinatorial circuit is coupled to receive an enable bit from a configuration register. A second input of the combinatorial circuit is coupled to the ring output node. In operation, a bit stream is supplied onto the ring input node. Feedback for multiple stages is enabled so that the bit stream undergoes complex permutation as it circulates.
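
    A software model of one clock cycle of the ring is sketched below in C, using an AND gate as the combinatorial circuit (one of the options named in the abstract); the stage count and field names are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define N_STAGES 16   /* illustrative number of serial-connected stages */

typedef struct {
    bool delay[N_STAGES];   /* state of each stage's delay element       */
    bool enable[N_STAGES];  /* per-stage enable bits from the config reg */
} entropy_ring_t;

/* One clock of the ring.  Each stage computes
 *   next_delay[i] = stage_input XOR (enable[i] AND ring_output)
 * where stage 0's input is the ring input node and every other stage's
 * input is the previous stage's delay output.  The AND gate stands in
 * for the combinatorial circuit. */
static bool entropy_ring_clock(entropy_ring_t *r, bool ring_in)
{
    bool ring_out = r->delay[N_STAGES - 1];   /* output node: last delay element */
    bool next[N_STAGES];

    for (int i = 0; i < N_STAGES; i++) {
        bool stage_in = (i == 0) ? ring_in : r->delay[i - 1];
        bool feedback = r->enable[i] && ring_out;  /* gated feedback from output */
        next[i] = stage_in ^ feedback;             /* XOR, captured by the delay */
    }
    for (int i = 0; i < N_STAGES; i++)
        r->delay[i] = next[i];

    return ring_out;
}
```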
