Credit-based resource allocator circuit
    101.
    Granted invention patent (in force)

    Publication Number: US09282051B2

    Publication Date: 2016-03-08

    Application Number: US13928235

    Filing Date: 2013-06-26

    CPC classification number: H04L47/39 H04L47/822

    Abstract: A high-speed credit-based allocator circuit receives an allocation request to make an allocation to one of a set of processing entities. The allocator circuit maintains a chain of bubble sorting module circuits for the set, where each bubble sorting module circuit stores a resource value and an indication of a corresponding processing entity. A bubble sorting operation is performed so that the head of the chain tends to indicate the processing entity of the set that has the highest amount of the resource (credit) available. The requested allocation is made to the processing entity indicated by the head module circuit of the chain. The amount of the resource available to each processing entity is tracked by adjusting the resource values as allocations are made and as allocated tasks are completed. The allocator circuit is configurable to maintain multiple chains, thereby supporting credit-based allocations to multiple sets of processing entities.

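    The following is a minimal software sketch of the credit-tracking scheme the abstract describes, using invented structure and field names (the patent describes a hardware chain of bubble sorting module circuits, not C code): each chain element holds a credit value and an entity number, one bubble pass per allocation drifts higher-credit entries toward the head, the head entity receives the allocation, and credits are adjusted as allocations are made and as tasks complete.

```c
#include <stdio.h>

#define CHAIN_LEN 4   /* one module circuit per processing entity (illustrative) */

struct module { int credits; int entity; };   /* resource value + entity indication */

/* One bubble pass: neighbors swap so higher credit counts drift toward index 0 (the head). */
static void bubble_pass(struct module chain[], int len) {
    for (int i = len - 1; i > 0; i--)
        if (chain[i].credits > chain[i - 1].credits) {
            struct module tmp = chain[i];
            chain[i] = chain[i - 1];
            chain[i - 1] = tmp;
        }
}

/* Grant an allocation of 'cost' credits to the entity at the head of the chain. */
static int allocate(struct module chain[], int len, int cost) {
    bubble_pass(chain, len);
    chain[0].credits -= cost;          /* adjust the resource value as the allocation is made */
    return chain[0].entity;
}

/* When an allocated task completes, return its credits to the owning entity. */
static void task_done(struct module chain[], int len, int entity, int cost) {
    for (int i = 0; i < len; i++)
        if (chain[i].entity == entity) { chain[i].credits += cost; break; }
}

int main(void) {
    struct module chain[CHAIN_LEN] = {{5, 0}, {9, 1}, {3, 2}, {7, 3}};
    int who = allocate(chain, CHAIN_LEN, 2);
    printf("allocated to entity %d\n", who);   /* entity 1, which had the most credit */
    task_done(chain, CHAIN_LEN, who, 2);
    return 0;
}
```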

    Inverse PCP flow remapping for PFC pause frame generation
    102.
    Granted invention patent (in force)

    Publication Number: US09258256B2

    Publication Date: 2016-02-09

    Application Number: US14321762

    Filing Date: 2014-07-01

    Inventor: Joseph M. Lamb

    CPC classification number: H04L49/3045 H04L49/9005 H04L49/9042 Y02P80/112

    Abstract: An overflow threshold value is stored for each of a plurality of virtual channels. A link manager maintains, for each virtual channel, a buffer count. If the buffer count for a virtual channel whose originating PCP flows were merged is detected to exceed that channel's overflow threshold value, then a PFC (Priority Flow Control) pause frame is generated in which multiple priority class enable bits are set to indicate that multiple PCP flows should be paused. For the particular virtual channel that is overloaded, an Inverse PCP Remap LUT (IPRLUT) circuit performs inverse PCP mapping, including merging and/or reordering mapping, and outputs an indication of each of the PCP flows associated with the overloaded virtual channel. Associated physical MAC port circuitry uses this information to generate the PFC pause frame so that the appropriate multiple enable bits are set in the pause frame.

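    A hedged sketch of the overflow check and inverse remapping described above (the table contents, sizes, and names are illustrative assumptions, not taken from the patent): each virtual channel has a buffer count and an overflow threshold, an inverse-remap table records which PCP flows were merged into each virtual channel, and when a channel overflows, those bits form the priority-class-enable vector carried in the PFC pause frame.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_VC 4   /* number of virtual channels (illustrative) */

static uint32_t buf_count[NUM_VC];   /* per-VC buffer counts maintained by the link manager */
static uint32_t threshold[NUM_VC];   /* per-VC overflow threshold values */

/* Inverse PCP Remap LUT: bit i set means PCP flow i was merged into this virtual channel. */
static uint8_t iprlut[NUM_VC] = {
    0x03,   /* VC0 <- PCP flows 0 and 1 (merged)   */
    0x04,   /* VC1 <- PCP flow 2                   */
    0x38,   /* VC2 <- PCP flows 3, 4 and 5 (merged)*/
    0xC0,   /* VC3 <- PCP flows 6 and 7 (merged)   */
};

/* Returns the 8-bit priority-class-enable vector for a PFC pause frame,
 * or 0 if no pause frame needs to be generated for this virtual channel. */
static uint8_t check_overflow(int vc) {
    if (buf_count[vc] > threshold[vc])
        return iprlut[vc];   /* pause every PCP flow that feeds the overloaded VC */
    return 0;
}

int main(void) {
    threshold[2] = 100;
    buf_count[2] = 150;                  /* VC2 is overloaded */
    uint8_t enable = check_overflow(2);
    printf("PFC enable bits: 0x%02X\n", (unsigned)enable);   /* 0x38: flows 3, 4, 5 paused */
    return 0;
}
```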

    Island-based network flow processor integrated circuit
    103.
    Granted invention patent (in force)

    Publication Number: US09237095B2

    Publication Date: 2016-01-12

    Application Number: US13399888

    Filing Date: 2012-02-17

    CPC classification number: H04L45/50 G06F15/7867 Y10T29/49124

    Abstract: A reconfigurable, scalable and flexible island-based network flow processor integrated circuit architecture includes a plurality of rectangular islands of identical shape and size. The islands are disposed in rows, and a configurable mesh command/push/pull data bus extends through all the islands. The integrated circuit includes first SerDes I/O blocks, an ingress MAC island that converts incoming symbols into packets, an ingress NBI island that analyzes packets and generates ingress packet descriptors, a microengine (ME) island that receives ingress packet descriptors and headers from the ingress NBI and analyzes the headers, a memory unit (MU) island that receives payloads from the ingress NBI and performs lookup operations and stores payloads, an egress NBI island that receives the header portions and the payload portions and egress descriptors and performs egress scheduling, and an egress MAC island that outputs packets to second SerDes I/O blocks.

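    As a rough orientation to the dataflow the abstract lists, here is a toy software model of the packet path through the named islands; every structure, field, and return value is an illustrative assumption, since the actual design is a chip floorplan with a mesh bus rather than code.

```c
#include <stdio.h>
#include <string.h>

/* Toy model of the packet path: ingress MAC -> ingress NBI -> ME/MU -> egress NBI -> egress MAC. */
struct packet { unsigned char header[64]; unsigned char payload[256]; };

/* Ingress NBI island: the ingress packet descriptor it generates. */
struct ingress_desc { int flow_id; int payload_handle; };

/* MU island: store the payload, perform lookups, and hand back a handle. */
static int mu_store(const unsigned char *payload) { (void)payload; return 42; }

/* ME island: analyze the header and choose an egress port. */
static int me_analyze(const unsigned char *header) { return header[0] & 0x07; }

int main(void) {
    struct packet p;
    memset(&p, 0, sizeof p);
    p.header[0] = 0x25;   /* pretend the ingress MAC island assembled this packet from symbols */

    /* Ingress NBI island analyzes the packet and generates the descriptor. */
    struct ingress_desc d = { .flow_id = p.header[0] >> 4, .payload_handle = mu_store(p.payload) };

    /* ME island works on the header; egress NBI schedules; egress MAC transmits. */
    int out_port = me_analyze(p.header);
    printf("flow %d, payload handle %d, egress port %d\n", d.flow_id, d.payload_handle, out_port);
    return 0;
}
```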

    DDR retiming circuit
    104.
    Granted invention patent (in force)

    Publication Number: US09208844B1

    Publication Date: 2015-12-08

    Application Number: US14448841

    Filing Date: 2014-07-31

    CPC classification number: G11C11/4093 G11C7/1084 G11C7/1093 G11C7/222

    Abstract: An integrated circuit receives a DDR (Double Data Rate) data signal and an associated DDR clock signal, and communicates those signals from integrated circuit input terminals a substantial distance across the integrated circuit to a subcircuit that then receives and uses the DDR data. Within the integrated circuit, a DDR retiming circuit receives the DDR data signal and the associated DDR clock signal from the terminals. The DDR retiming circuit splits the DDR data signal into two components, and then transmits those two components over the substantial distance toward the subcircuit. The two components are then recombined back into a single DDR data signal, and the DDR data signal and the DDR clock signal are supplied to the subcircuit in such a way that the setup and hold time requirements of the subcircuit are met.

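    The split-and-recombine idea can be pictured with a small software analogy (purely illustrative; the patent concerns on-chip signal timing, not software): a DDR stream carries one bit per clock edge, so it can be separated into a rising-edge component and a falling-edge component, carried independently, and then interleaved back into the original double-data-rate order.

```c
#include <stdio.h>

#define NBITS 8

int main(void) {
    int ddr[NBITS] = {1, 0, 1, 1, 0, 0, 1, 0};   /* one bit per clock edge */
    int rising[NBITS / 2], falling[NBITS / 2], rebuilt[NBITS];

    for (int i = 0; i < NBITS / 2; i++) {         /* split into two single-rate components */
        rising[i]  = ddr[2 * i];
        falling[i] = ddr[2 * i + 1];
    }
    for (int i = 0; i < NBITS / 2; i++) {         /* recombine at the far end */
        rebuilt[2 * i]     = rising[i];
        rebuilt[2 * i + 1] = falling[i];
    }
    for (int i = 0; i < NBITS; i++) printf("%d", rebuilt[i]);   /* 10110010 */
    printf("\n");
    return 0;
}
```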

    KICK-STARTED RUN-TO-COMPLETION PROCESSING METHOD THAT DOES NOT INVOLVE AN INSTRUCTION COUNTER
    105.
    Invention patent application (under examination, published)

    Publication Number: US20150317162A1

    Publication Date: 2015-11-05

    Application Number: US14267329

    Filing Date: 2014-05-01

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor includes no instruction counter and only fetches instructions either as a result of being prompted from the outside by an input data value and/or an initial fetch information value, or as a result of execution of a fetch instruction. Initially the processor is not clocking. An incoming value kick-starts the processor to start clocking and to fetch a block of instructions from a section of code in a table. The input data value and/or the initial fetch information value determines the section and table from which the block is fetched. A LUT converts a table number in the initial fetch information value into a base address where the table is found. Fetch instructions at the ends of sections of code cause program execution to jump from section to section. A finished instruction causes an output data value to be output and stops clocking of the processor.

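    A small sketch of the fetch addressing step mentioned in the abstract, with assumed field widths and a made-up LUT (the real processor is hardware with no program counter): the initial fetch information value carries a table number and a section offset, the LUT turns the table number into the base address where that table of code is found, and the first block of instructions is fetched from base plus offset.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout of the initial fetch information value: low bits select a
 * section of code, upper bits select a table (widths are assumptions, not the patent's). */
#define SECTION_BITS 8
#define SECTION_MASK ((1u << SECTION_BITS) - 1)

/* LUT that converts a table number into the base address where that table is found. */
static const uint32_t table_base_lut[4] = {
    0x0000,   /* table 0 */
    0x1000,   /* table 1 */
    0x2000,   /* table 2 */
    0x3000,   /* table 3 */
};

/* Address of the first instruction block to fetch after the kick-start. */
static uint32_t initial_fetch_address(uint32_t fetch_info) {
    uint32_t table   = fetch_info >> SECTION_BITS;
    uint32_t section = fetch_info & SECTION_MASK;
    return table_base_lut[table & 3] + section;
}

int main(void) {
    uint32_t fetch_info = (2u << SECTION_BITS) | 0x40;   /* table 2, section 0x40 */
    printf("fetch from 0x%04X\n", (unsigned)initial_fetch_address(fetch_info));   /* 0x2040 */
    return 0;
}
```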

    PICOENGINE MULTI-PROCESSOR WITH POWER CONTROL MANAGEMENT
    106.
    Invention patent application (in force)

    Publication Number: US20150293578A1

    Publication Date: 2015-10-15

    Application Number: US14251599

    Filing Date: 2014-04-12

    Inventor: Gavin J. Stark

    Abstract: A general purpose PicoEngine Multi-Processor (PEMP) includes a hierarchically organized pool of small specialized picoengine processors and associated memories. A stream of data input values is received onto the PEMP. Each input data value is characterized, and from the characterization a task is determined. Picoengines are selected in a sequence. When the next picoengine in the sequence is available, it is then given the input data value along with an associated task assignment. The picoengine then performs the task. An output picoengine selector selects picoengines in the same sequence. If the next picoengine indicates that it has completed its assigned task, then the output value from the selected picoengine is output from the PEMP. By changing the sequence used, more or less of the processing power and memory resources of the pool is brought to bear on the incoming data stream. The PEMP automatically disables unused picoengines and memories.

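    A simplified model of the input and output sequencing described above (pool size, sequence length, and the stand-in task are illustrative assumptions): picoengines are handed work in a fixed sequence and drained in the same sequence, so results come out in assignment order, and changing the sequence length brings more or less of the pool to bear on the incoming stream.

```c
#include <stdbool.h>
#include <stdio.h>

#define POOL_SIZE 8

struct picoengine { bool has_result; int result; };

static struct picoengine pool[POOL_SIZE];
static int seq_len = 4;      /* how much of the pool the current sequence brings to bear */
static int in_idx = 0;       /* next picoengine to be handed an input value and task     */
static int out_idx = 0;      /* next picoengine the output selector will look at         */

/* Hand an input value (and its task assignment) to the next picoengine in the sequence.
 * Returns false if that picoengine has not yet been drained, modeling back-pressure. */
static bool assign(int input_value) {
    struct picoengine *pe = &pool[in_idx];
    if (pe->has_result) return false;
    pe->result = input_value * 2;   /* stand-in for the assigned task                  */
    pe->has_result = true;          /* task completes immediately in this toy model    */
    in_idx = (in_idx + 1) % seq_len;
    return true;
}

/* Output selector: drain results in the same sequence, preserving input order. */
static bool drain(int *out) {
    struct picoengine *pe = &pool[out_idx];
    if (!pe->has_result) return false;
    *out = pe->result;
    pe->has_result = false;
    out_idx = (out_idx + 1) % seq_len;
    return true;
}

int main(void) {
    for (int v = 1; v <= 4; v++) assign(v);
    int r;
    while (drain(&r)) printf("%d ", r);   /* 2 4 6 8, in assignment order */
    printf("\n");
    return 0;
}
```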

    CHAINED-INSTRUCTION DISPATCHER
    107.
    Invention patent application (under examination, published)

    Publication Number: US20150277924A1

    Publication Date: 2015-10-01

    Application Number: US14231028

    Filing Date: 2014-03-31

    Abstract: A dispatcher circuit receives sets of instructions from an instructing entity. Instructions of the set that are of a first type are put into a first queue circuit, instructions of a second type are put into a second queue circuit, and so forth. The first queue circuit dispatches instructions of the first type to one or more processing engines and records when the instructions of the set are completed. When all the instructions of the set of the first type have been completed, the first queue circuit sends the second queue circuit a go signal, which causes the second queue circuit to dispatch instructions of the second type and to record when they have been completed. This process proceeds from queue circuit to queue circuit. When all the instructions of the set have been completed, the dispatcher circuit returns an “instructions done” indication to the original instructing entity.

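    A rough sequential model of the queue chaining described above (the data structures are invented for illustration): each instruction type has its own queue, a queue dispatches only after the queue ahead of it signals go, i.e. after everything that queue dispatched has completed, and completion of the last queue produces the "instructions done" reply.

```c
#include <stdio.h>

#define NUM_QUEUES 3
#define MAX_PER_QUEUE 8

struct queue {
    int count;                  /* instructions of this type in the set   */
    int instr[MAX_PER_QUEUE];   /* the queued instructions (opaque here)  */
    int completed;              /* how many have been reported complete   */
};

static struct queue queues[NUM_QUEUES];

/* Stand-in for handing an instruction to a processing engine and recording its completion. */
static void dispatch_and_record(struct queue *q, int i) {
    (void)q->instr[i];
    q->completed++;
}

/* Process the whole set: queue k only starts after queue k-1 signals "go",
 * i.e. after every instruction it dispatched has completed. */
static void run_set(void) {
    for (int k = 0; k < NUM_QUEUES; k++) {            /* the go signal ripples queue to queue */
        for (int i = 0; i < queues[k].count; i++)
            dispatch_and_record(&queues[k], i);
        /* all instructions of this type are done; the next queue may now dispatch */
    }
    printf("instructions done\n");                    /* reply to the instructing entity */
}

int main(void) {
    queues[0].count = 2;   /* type-1 instructions */
    queues[1].count = 1;   /* type-2 instructions */
    queues[2].count = 3;   /* type-3 instructions */
    run_set();
    return 0;
}
```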

    TRANSACTIONAL MEMORY THAT PERFORMS A PROGRAMMABLE ADDRESS TRANSLATION IF A DAT BIT IN A TRANSACTIONAL MEMORY WRITE COMMAND IS SET
    108.
    Invention patent application (in force)

    Publication Number: US20150220445A1

    Publication Date: 2015-08-06

    Application Number: US14172856

    Filing Date: 2014-02-04

    Abstract: A transactional memory receives a command, where the command includes an address and a novel DAT (Do Address Translation) bit. If the DAT bit is set and if the transactional memory is enabled to do address translations and if the command is for an access (read or write) of a memory of the transactional memory, then the transactional memory performs an address translation operation on the address of the command. Parameters of the address translation are programmable and are set up before the command is received. In one configuration, certain bits of the incoming address are deleted, and other bits are shifted in bit position, and a base address is ORed in, and a padding bit is added, thereby generating the translated address. The resulting translated address is then used to access the memory of the transactional memory to carry out the command.

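    A bit-level sketch of the kind of translation the abstract outlines; the parameter encoding and bit positions below are illustrative assumptions: selected bits of the incoming address are deleted, the bits above them shift down to close the gap, a programmed base address is ORed in, and padding is appended at the bottom.

```c
#include <stdint.h>
#include <stdio.h>

/* Programmable translation parameters, set up before any command arrives
 * (field meanings are assumptions for illustration). */
struct dat_config {
    int      del_lsb;     /* lowest bit position of the deleted field           */
    int      del_width;   /* number of address bits to delete                   */
    uint32_t base;        /* base address ORed into the result                  */
    int      pad_bits;    /* number of zero padding bits appended at the bottom */
};

/* Translate 'addr' if the command's DAT bit is set; otherwise pass it through. */
static uint32_t translate(uint32_t addr, int dat_bit, const struct dat_config *cfg) {
    if (!dat_bit)
        return addr;
    uint32_t low  = addr & ((1u << cfg->del_lsb) - 1);         /* bits below the deleted field */
    uint32_t high = addr >> (cfg->del_lsb + cfg->del_width);   /* bits above it, shifted down  */
    uint32_t packed = (high << cfg->del_lsb) | low;            /* deleted bits squeezed out    */
    return (cfg->base | packed) << cfg->pad_bits;              /* OR in base, append padding   */
}

int main(void) {
    struct dat_config cfg = { .del_lsb = 8, .del_width = 4, .base = 0x40000, .pad_bits = 1 };
    uint32_t in  = 0x00012345;   /* bits 8..11 (value 0x3) will be deleted */
    uint32_t out = translate(in, 1, &cfg);
    printf("0x%08X -> 0x%08X\n", (unsigned)in, (unsigned)out);
    return 0;
}
```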

    Transactional memory that performs a split 32-bit lookup operation
    109.
    Granted invention patent (in force)

    Publication Number: US09098353B2

    Publication Date: 2015-08-04

    Application Number: US13675259

    Filing Date: 2012-11-13

    Applicant: Gavin J. Stark

    Inventor: Gavin J. Stark

    CPC classification number: G06F9/526 G06F9/34 G06F9/467 G06F13/00 G06F17/30

    Abstract: A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a memory address, a starting bit position, and a mask size. In response to the command, the TM pulls an input value (IV). The memory address is used to read a word containing multiple result values (RVs) and multiple threshold values (TVs) from memory. A selecting circuit within the TM uses the starting bit position and mask size to select a portion of the IV. The portion of the IV is a lookup key value (LKV). The multiple TVs define multiple lookup key ranges. The TM determines which lookup key range includes the LKV. A RV is selected based upon the lookup key range determined to include the LKV. The lookup key range is determined by a lookup key range identifier circuit. The selected RV is selected by a result value selection circuit.

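    An approximate software rendering of the lookup described above (the word layout and sizes are assumptions): the lookup key value is extracted from the input value using the starting bit position and mask size, the threshold values in the fetched word define consecutive key ranges, and the result value for the range that contains the key is selected.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_RANGES 4   /* illustrative: one word holds 4 result values and 3 thresholds */

/* One memory word's worth of lookup data (layout is an assumption, not the patent's). */
struct lookup_word {
    uint32_t threshold[NUM_RANGES - 1];   /* ascending thresholds defining key ranges */
    uint32_t result[NUM_RANGES];          /* result value for each range              */
};

/* Extract the lookup key value (LKV) from the input value (IV). */
static uint32_t extract_lkv(uint32_t iv, int start_bit, int mask_size) {
    return (iv >> start_bit) & ((1u << mask_size) - 1);
}

/* Pick the result value whose key range contains the LKV. */
static uint32_t select_result(const struct lookup_word *w, uint32_t lkv) {
    for (int i = 0; i < NUM_RANGES - 1; i++)
        if (lkv < w->threshold[i])
            return w->result[i];
    return w->result[NUM_RANGES - 1];
}

int main(void) {
    struct lookup_word w = {
        .threshold = { 10, 100, 1000 },
        .result    = { 0xA, 0xB, 0xC, 0xD },
    };
    uint32_t iv  = 0x00012300;               /* input value pulled by the TM        */
    uint32_t lkv = extract_lkv(iv, 8, 12);   /* start bit 8, 12-bit mask -> 0x123   */
    printf("result 0x%X\n", (unsigned)select_result(&w, lkv));   /* 291 falls in range 2 -> 0xC */
    return 0;
}
```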

    COMMAND-DRIVEN NFA HARDWARE ENGINE THAT ENCODES MULTIPLE AUTOMATONS
    110.
    Invention patent application (in force)

    Publication Number: US20150193484A1

    Publication Date: 2015-07-09

    Application Number: US14151666

    Filing Date: 2014-01-09

    CPC classification number: H04L12/6418 G06F9/4498 G06F17/30283 G11C15/00

    Abstract: An automaton hardware engine employs a transition table organized into 2^n rows, where each row comprises a plurality of n-bit storage locations, and where each storage location can store at most one n-bit entry value. Each row corresponds to an automaton state. In one example, at least two NFAs are encoded into the table. The first NFA is indexed into the rows of the transition table in a first way, and the second NFA is indexed into the rows of the transition table in a second way. Due to this indexing, all rows are usable to store entry values that point to other rows.

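    One way to picture how two automatons can share a single 2^n-row transition table through different row indexings (the indexing functions below are invented to illustrate the idea, not taken from the patent): the first NFA maps its state numbers to rows counting up from the bottom of the table and the second maps its state numbers to rows counting down from the top, so every row remains usable for one automaton or the other.

```c
#include <stdint.h>
#include <stdio.h>

#define N 4                    /* n-bit entries -> 2^n = 16 rows */
#define ROWS (1u << N)
#define COLS 4                 /* storage locations per row (illustrative) */

static uint8_t table[ROWS][COLS];   /* each entry is an n-bit row number (a next state) */

/* First NFA: state s occupies row s, counting up from row 0. */
static unsigned row_for_nfa1(unsigned s) { return s; }

/* Second NFA: state s occupies row (ROWS - 1 - s), counting down from the top,
 * so the two automatons can share the table without colliding (if they fit). */
static unsigned row_for_nfa2(unsigned s) { return ROWS - 1u - s; }

int main(void) {
    /* NFA 1: from state 0, symbol class 1 goes to state 2. */
    table[row_for_nfa1(0)][1] = (uint8_t)row_for_nfa1(2);

    /* NFA 2: from state 0, symbol class 1 goes to state 1. */
    table[row_for_nfa2(0)][1] = (uint8_t)row_for_nfa2(1);

    printf("NFA1 row 0 entry 1 -> row %u\n", (unsigned)table[0][1]);                     /* row 2  */
    printf("NFA2 row %u entry 1 -> row %u\n", ROWS - 1, (unsigned)table[ROWS - 1][1]);   /* row 14 */
    return 0;
}
```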
