Chained-instruction dispatcher
    11.
    Granted patent

    Publication number: US10031758B2

    Publication date: 2018-07-24

    Application number: US14231028

    Filing date: 2014-03-31

    Abstract: A dispatcher circuit receives sets of instructions from an instructing entity. Instructions of the set that are of a first type are put into a first queue circuit, instructions of a second type are put into a second queue circuit, and so forth. The first queue circuit dispatches instructions of the first type to one or more processing engines and records when those instructions are completed. When all the instructions of the set of the first type have been completed, the first queue circuit sends the second queue circuit a go signal, which causes the second queue circuit to dispatch instructions of the second type and to record when they have been completed. The process proceeds in this way from queue circuit to queue circuit. When all the instructions of the set have been completed, the dispatcher circuit returns an “instructions done” indication to the original instructing entity.
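
    The chained hand-off described in the abstract can be illustrated with a small software model. This is a minimal sketch only; the QueueCircuit and Dispatcher classes, the callable stand-ins for processing engines, and the synchronous go-signal hand-off are assumptions made for illustration, not the patented hardware design.

    # Minimal sketch, assuming a synchronous model: each queue holds one type of
    # instruction, and control passes queue to queue only after all instructions
    # of the previous type have completed.
    class QueueCircuit:
        def __init__(self, instr_type, engines):
            self.instr_type = instr_type
            self.engines = engines            # callables standing in for processing engines
            self.pending = []

        def load(self, instructions):
            self.pending = [i for i in instructions if i["type"] == self.instr_type]

        def dispatch_all(self):
            # Dispatch every queued instruction and record its completion; returning
            # from this method plays the role of the "go" signal to the next queue.
            for n, instr in enumerate(self.pending):
                self.engines[n % len(self.engines)](instr)
            self.pending.clear()

    class Dispatcher:
        def __init__(self, queues):
            self.queues = queues              # ordered: first type, second type, ...

        def run(self, instruction_set):
            for q in self.queues:
                q.load(instruction_set)
            for q in self.queues:
                q.dispatch_all()
            return "instructions done"        # returned to the instructing entity

    if __name__ == "__main__":
        engines = [lambda instr: print("engine ran", instr["op"])]
        d = Dispatcher([QueueCircuit("read", engines), QueueCircuit("write", engines)])
        print(d.run([{"type": "write", "op": "w0"}, {"type": "read", "op": "r0"}]))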

    Using a credits available value in determining whether to issue a PPI allocation request to a packet engine

    Publication number: US09665519B2

    Publication date: 2017-05-30

    Application number: US14591003

    Filing date: 2015-01-07

    CPC classification number: G06F13/4022 G06F13/4027 G06F13/4221

    Abstract: In response to receiving a “Return Available PPI Credits” command from a credit-aware (CA) device, a packet engine sends a “Credit To Be Returned” (CTBR) value it maintains for that device back to the CA device, and zeroes out its stored CTBR value. The CA device adds the credits returned to a “Credits Available” value it maintains. The CA device uses the “Credits Available” value to determine whether it can issue a PPI allocation request. The “Return Available PPI Credits” command does not result in any PPI allocation or de-allocation. In another aspect, the CA device issues one PPI allocation request to the packet engine when its recorded “Credits Available” value is zero or negative. If the PPI allocation request cannot be granted, then it is buffered in the packet engine, and is resubmitted within the packet engine, until the packet engine makes the PPI allocation.
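
    A rough software model of the credit bookkeeping follows. The PacketEngine and CreditAwareDevice classes, their method names, and the synchronous call interface are assumptions for illustration; they are not the actual hardware interface.

    class PacketEngine:
        def __init__(self):
            self.ctbr = {}                     # "Credit To Be Returned" value per CA device

        def free_ppi(self, dev_id, count=1):
            # De-allocations elsewhere accumulate credits to be returned later.
            self.ctbr[dev_id] = self.ctbr.get(dev_id, 0) + count

        def return_available_ppi_credits(self, dev_id):
            # Send back the CTBR value for this device and zero out the stored value.
            credits = self.ctbr.get(dev_id, 0)
            self.ctbr[dev_id] = 0
            return credits

    class CreditAwareDevice:
        def __init__(self, dev_id, engine, initial_credits):
            self.dev_id = dev_id
            self.engine = engine
            self.credits_available = initial_credits
            self.issued_at_or_below_zero = False

        def refresh_credits(self):
            # Issue the "Return Available PPI Credits" command; no PPI is (de)allocated.
            self.credits_available += self.engine.return_available_ppi_credits(self.dev_id)

        def maybe_issue_allocation_request(self):
            # Use "Credits Available" to decide whether a PPI allocation request may go out;
            # one request is permitted even when the value is zero or negative.
            if self.credits_available > 0 or not self.issued_at_or_below_zero:
                if self.credits_available <= 0:
                    self.issued_at_or_below_zero = True
                self.credits_available -= 1
                return True
            return False

    if __name__ == "__main__":
        engine = PacketEngine()
        dev = CreditAwareDevice("me-0", engine, initial_credits=0)
        print(dev.maybe_issue_allocation_request())   # True: one request allowed at zero
        print(dev.maybe_issue_allocation_request())   # False until credits are returned
        engine.free_ppi("me-0", 3)
        dev.refresh_credits()
        print(dev.credits_available)                  # 2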

    Guaranteed in-order packet delivery
    14.
    Granted patent (in force)

    Publication number: US09584637B2

    Publication date: 2017-02-28

    Application number: US14184455

    Filing date: 2014-02-19

    Abstract: Circuitry to provide in-order packet delivery. A packet descriptor including a sequence number is received, and it is determined in which of three ranges the sequence number resides. Depending, at least in part, on that range, it is determined whether the packet descriptor is to be communicated to a scheduler, which causes an associated packet to be transmitted. If the sequence number resides in a first “flush” range, all associated packet descriptors are output. If the sequence number resides in a second “send” range, only the received packet descriptor is output. If the sequence number resides in a third “store and reorder” range and the sequence number is the next in-order sequence number, the packet descriptor is output; if it is not the next in-order sequence number, the packet descriptor is stored in a buffer and a corresponding valid bit is set.
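
    The three-range decision logic can be sketched roughly as below. The range boundaries, the 16-bit sequence-number space, the buffer size, and the class and method names are assumptions chosen for illustration; the abstract does not fix them.

    SEQ_SPACE = 1 << 16        # assumed 16-bit sequence numbers
    BUF_SIZE = 64              # assumed reorder-buffer depth

    class InOrderDelivery:
        def __init__(self, expected=0):
            self.expected = expected                 # next in-order sequence number
            self.buf = [None] * BUF_SIZE
            self.valid = [False] * BUF_SIZE

        def classify(self, seq):
            offset = (seq - self.expected) % SEQ_SPACE
            if offset < BUF_SIZE:
                return "store_and_reorder"
            if offset < 2 * BUF_SIZE:
                return "send"
            return "flush"

        def receive(self, seq, descriptor):
            # Returns the packet descriptors to pass to the scheduler for transmission.
            out = []
            rng = self.classify(seq)
            if rng == "flush":
                out.extend(self.drain_all())         # output all buffered descriptors
                out.append(descriptor)
                self.expected = (seq + 1) % SEQ_SPACE
            elif rng == "send":
                out.append(descriptor)               # output only the received descriptor
            elif seq == self.expected:
                out.append(descriptor)               # next in order: output it...
                self.expected = (self.expected + 1) % SEQ_SPACE
                out.extend(self.drain_in_order())    # ...plus any now-in-order packets
            else:
                slot = seq % BUF_SIZE                # store and set the valid bit
                self.buf[slot], self.valid[slot] = descriptor, True
            return out

        def drain_in_order(self):
            out = []
            while self.valid[self.expected % BUF_SIZE]:
                slot = self.expected % BUF_SIZE
                out.append(self.buf[slot])
                self.valid[slot] = False
                self.expected = (self.expected + 1) % SEQ_SPACE
            return out

        def drain_all(self):
            out = [d for d, v in zip(self.buf, self.valid) if v]
            self.valid = [False] * BUF_SIZE
            return out

    if __name__ == "__main__":
        r = InOrderDelivery()
        print(r.receive(1, "pkt1"))    # out of order: buffered, nothing output
        print(r.receive(0, "pkt0"))    # ['pkt0', 'pkt1']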

    Transactional memory having local CAM and NFA resources
    15.
    Granted patent (in force)

    Publication number: US09465651B2

    Publication date: 2016-10-11

    Application number: US14151677

    Filing date: 2014-01-09

    CPC classification number: G06F9/467

    Abstract: A remote processor interacts with a transactional memory that has a memory, local BWC (Byte-Wise Compare) resources, and local NFA (Non-deterministic Finite Automaton) engine resources. The processor causes a byte stream to be transferred into the transactional memory and into the memory. The processor then uses the BWC circuit to find a character signature in the byte stream. The processor obtains information about the character signature from the BWC circuit, and based on the information uses the NFA engine to process the byte stream starting at a byte position determined based at least in part on the results of the BWC circuit. From the time the byte stream is initially written into the transactional memory until the time the NFA engine completes, the byte stream is not read out of the transactional memory.
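
    A loose software analogue of the described flow is shown below; find_signature stands in for the BWC (byte-wise compare) circuit and run_nfa for the NFA engine, and both stand-ins, along with the example signature, are hypothetical.

    def find_signature(byte_stream: bytes, signature: bytes) -> int:
        # Stand-in for the BWC circuit: locate a character signature in the byte stream.
        return byte_stream.find(signature)

    def run_nfa(byte_stream: bytes, start: int) -> bool:
        # Stand-in for the NFA engine: evaluate the bytes beginning at 'start'.
        return byte_stream[start:].startswith(b"GET ")

    def process(byte_stream: bytes) -> bool:
        # The byte stream is written once (modeled here as a local bytes object) and is
        # not read back out until the NFA step has completed.
        offset = find_signature(byte_stream, b"\r\n\r\n")   # BWC finds the signature
        if offset < 0:
            return False
        # The NFA starts at a byte position derived from the BWC result.
        return run_nfa(byte_stream, offset + len(b"\r\n\r\n"))

    if __name__ == "__main__":
        print(process(b"header: value\r\n\r\nGET /index.html"))   # True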

    NFA completion notification
    16.
    Granted patent

    Publication number: US10362093B2

    Publication date: 2019-07-23

    Application number: US14151699

    Filing date: 2014-01-09

    Abstract: Multiple processors share access, via a bus, to a pipelined NFA engine. The NFA engine can implement an NFA of the type that is not a DFA (namely, it can be in multiple states at the same time). One of the processors communicates a configuration command, a go command, and an event generate command across the bus to the NFA engine. The event generate command includes a reference value. The configuration command causes the NFA engine to be configured. The go command causes the configured NFA engine to perform a particular NFA operation. Upon completion of the NFA operation, the event generate command causes the reference value to be returned back across the bus to the processor.
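
    A toy model of the command sequence follows: a configuration command, a go command, and an event generate command whose reference value comes back when the operation finishes. The NfaEngine class, the queue standing in for the bus, and the placeholder NFA operation are assumptions made for illustration.

    import queue

    class NfaEngine:
        def __init__(self):
            self.config = None
            self.events = queue.Queue()          # stands in for the shared bus

        def configure(self, config):
            # Configuration command: set up the NFA to be run.
            self.config = config

        def go(self, data):
            # Go command: perform the NFA operation (placeholder: substring match).
            self.result = self.config in data

        def event_generate(self, reference_value):
            # On completion, the reference value is returned back across the "bus".
            self.events.put(reference_value)

    if __name__ == "__main__":
        engine = NfaEngine()
        engine.configure(b"abc")                 # configuration command
        engine.go(b"xxabcxx")                    # go command
        engine.event_generate(0x1234)            # event generate command with reference value
        print(hex(engine.events.get()))          # the processor sees its reference value back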

    Distributed credit FIFO link of a configurable mesh data bus

    Publication number: US09971720B1

    Publication date: 2018-05-15

    Application number: US14724820

    Filing date: 2015-05-29

    CPC classification number: G06F13/4022 G06F13/00 H04L47/39 H04L49/901

    Abstract: An island-based integrated circuit includes a configurable mesh data bus. The data bus includes four meshes. Each mesh includes, for each island, a crossbar switch and radiating half links. The half links of adjacent islands align to form links between crossbar switches. A link is implemented as two distributed credit FIFOs, one per direction. In one direction, a link portion involves a first FIFO associated with an output port of a first island, a first chain of registers, and a second FIFO associated with an input port of a second island. When a transaction value passes through the distributed FIFO and through the crossbar switch of the second island, an arbiter in the crossbar switch returns a taken signal. The taken signal passes back through a second chain of registers to a credit count circuit in the first island. The credit count circuit maintains a credit count value for the distributed credit FIFO.
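
    The credit accounting for one direction of such a link can be modeled as below. The CreditLink class, and collapsing the register chains into simple queues, are simplifications made for illustration.

    from collections import deque

    class CreditLink:
        def __init__(self, credits):
            self.credits = credits          # credit count value held in the sending island
            self.fifo = deque()             # distributed FIFO toward the far crossbar switch
            self.taken_chain = deque()      # return path (the chain of registers)

        def send(self, value):
            # A transaction value may enter the link only while credits remain.
            if self.credits == 0:
                return False
            self.credits -= 1
            self.fifo.append(value)
            return True

        def crossbar_pop(self):
            # The arbiter in the receiving crossbar accepts a value and asserts "taken".
            if not self.fifo:
                return None
            self.taken_chain.append(True)
            return self.fifo.popleft()

        def clock_return_path(self):
            # Each taken signal arriving back at the sender restores one credit.
            while self.taken_chain:
                self.taken_chain.popleft()
                self.credits += 1

    if __name__ == "__main__":
        link = CreditLink(credits=2)
        link.send("A"); link.send("B")
        print(link.send("C"))               # False: no credits left
        link.crossbar_pop()
        link.clock_return_path()
        print(link.credits)                 # 1: one credit returned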

    Return available PPI credits command

    Publication number: US09703739B2

    Publication date: 2017-07-11

    Application number: US14590920

    Filing date: 2015-01-06

    CPC classification number: G06F13/4022 G06F13/4027 G06F13/4221

    Abstract: In response to receiving a novel “Return Available PPI Credits” command from a credit-aware device, a packet engine sends a “Credit To Be Returned” (CTBR) value it maintains for that device back to the credit-aware device, and zeroes out its stored CTBR value. The credit-aware device adds the credits returned to a “Credits Available” value it maintains. The credit-aware device uses the “Credits Available” value to determine whether it can issue a PPI allocation request. The “Return Available PPI Credits” command does not result in any PPI allocation or de-allocation. In another novel aspect, the credit-aware device is permitted to issue one PPI allocation request to the packet engine when its recorded “Credits Available” value is zero or negative. If the PPI allocation request cannot be granted, then it is buffered in the packet engine, and is resubmitted within the packet engine, until the packet engine makes the PPI allocation.

    USING A CREDITS AVAILABLE VALUE IN DETERMINING WHETHER TO ISSUE A PPI ALLOCATION REQUEST TO A PACKET ENGINE
    19.
    Patent application (in force)

    Publication number: US20160055111A1

    Publication date: 2016-02-25

    Application number: US14591003

    Filing date: 2015-01-07

    CPC classification number: G06F13/4022 G06F13/4027 G06F13/4221

    Abstract: In response to receiving a novel “Return Available PPI Credits” command from a credit-aware device, a packet engine sends a “Credit To Be Returned” (CTBR) value it maintains for that device back to the credit-aware device, and zeroes out its stored CTBR value. The credit-aware device adds the credits returned to a “Credits Available” value it maintains. The credit-aware device uses the “Credits Available” value to determine whether it can issue a PPI allocation request. The “Return Available PPI Credits” command does not result in any PPI allocation or de-allocation. In another novel aspect, the credit-aware device is permitted to issue one PPI allocation request to the packet engine when its recorded “Credits Available” value is zero or negative. If the PPI allocation request cannot be granted, then it is buffered in the packet engine, and is resubmitted within the packet engine, until the packet engine makes the PPI allocation.

    TRANSACTIONAL MEMORY HAVING LOCAL CAM AND NFA RESOURCES
    20.
    Patent application (in force)

    Publication number: US20150193266A1

    Publication date: 2015-07-09

    Application number: US14151677

    Filing date: 2014-01-09

    CPC classification number: G06F9/467

    Abstract: An automaton hardware engine employs a transition table organized into 2^n rows, where each row comprises a plurality of n-bit storage locations, and where each storage location can store at most one n-bit entry value. Each row corresponds to an automaton state. In one example, at least two NFAs are encoded into the table. The first NFA is indexed into the rows of the transition table in a first way, and the second NFA is indexed into the rows of the transition table in a second way. Due to this indexing, all rows are usable to store entry values that point to other rows.
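
    A rough sketch of such a table is given below: 2^n rows of n-bit entries hold two NFAs indexed from opposite ends of the table, so every row remains usable. The specific indexing scheme, entry width, and row size are illustrative assumptions, not the encoding used by the hardware.

    N = 4                                    # assumed entry width in bits
    ROWS = 1 << N                            # 2**n rows, one per automaton state
    COLS = 8                                 # assumed storage locations per row

    table = [[None] * COLS for _ in range(ROWS)]

    def row_for(nfa_id, state):
        # First NFA indexed from the bottom of the table, second from the top.
        return state if nfa_id == 0 else ROWS - 1 - state

    def add_transition(nfa_id, state, column, next_state):
        # Each entry is an n-bit value that points to another row (another state).
        table[row_for(nfa_id, state)][column] = row_for(nfa_id, next_state)

    add_transition(0, 0, 3, 1)               # NFA 0: state 0 --input 3--> state 1
    add_transition(1, 0, 3, 2)               # NFA 1: state 0 --input 3--> state 2
    print(table[0][3], table[ROWS - 1][3])   # 1 13: both NFAs share one table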
