GLOBAL RANDOM EARLY DETECTION PACKET DROPPING BASED ON AVAILABLE MEMORY
    101.
    Invention Application
    GLOBAL RANDOM EARLY DETECTION PACKET DROPPING BASED ON AVAILABLE MEMORY (Granted)

    Publication No.: US20160099880A1

    Publication Date: 2016-04-07

    Application No.: US14507621

    Application Date: 2014-10-06

    CPC classification number: H04L49/9084

    Abstract: An apparatus and method for receiving a packet descriptor and a queue number that indicates a queue stored within a memory unit, determining a first amount of free memory in a group of packet descriptor queues, determining if the first amount of free memory is within a first range, applying a first drop probability to determine if the packet associated with the packet descriptor should be dropped when the first amount of free memory is within the first range, and applying a second drop probability to determine if the packet should be dropped when the first amount of free memory is within a second range. When it is determined that the packet is to be dropped, the packet descriptor is not stored in the queue. When it is determined that the packet is not to be dropped, the packet descriptor is stored in the queue.

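    A minimal C sketch of the two-range drop decision described in the abstract above; the range thresholds, drop percentages, and the rand()-based probability test are illustrative assumptions, not details taken from the patent.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdlib.h>

        #define RANGE1_MIN  (64 * 1024)   /* assumed lower bound of the first free-memory range */
        #define RANGE1_MAX  (128 * 1024)  /* assumed upper bound of the first free-memory range */
        #define DROP_PROB1  25            /* assumed drop probability (%) in the first range  */
        #define DROP_PROB2  75            /* assumed drop probability (%) in the second range */

        /* Decide whether the packet behind a descriptor should be dropped,
         * given the amount of free memory in the group of packet descriptor
         * queues.  Returns true if the descriptor must NOT be enqueued. */
        bool red_should_drop(uint32_t free_bytes)
        {
            uint32_t roll = (uint32_t)(rand() % 100);

            if (free_bytes >= RANGE1_MAX)
                return false;                  /* plenty of memory: never drop      */
            if (free_bytes >= RANGE1_MIN)
                return roll < DROP_PROB1;      /* first range: lower drop probability  */
            return roll < DROP_PROB2;          /* second range: higher drop probability */
        }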

    Credit-based resource allocator circuit
    102.
    Invention Grant
    Credit-based resource allocator circuit (Granted)

    Publication No.: US09282051B2

    Publication Date: 2016-03-08

    Application No.: US13928235

    Application Date: 2013-06-26

    CPC classification number: H04L47/39 H04L47/822

    Abstract: A high-speed credit-based allocator circuit receives an allocation request to make an allocation to one of a set of processing entities. The allocator circuit maintains a chain of bubble sorting module circuits for the set, where each bubble sorting module circuit stores a resource value and an indication of a corresponding processing entity. A bubble sorting operation is performed so that the head of the chain tends to indicate the processing entity of the set that has the highest amount of the resource (credit) available. The requested allocation is made to the processing entity indicated by the head module circuit of the chain. The amount of the resource available to each processing entity is tracked by adjusting the resource values as allocations are made, and as allocated tasks are completed. The allocator circuit is configurable to maintain multiple chains, thereby supporting credit-based allocations to multiple sets of processing entities.

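    A small C sketch of the bubble-sorting credit chain described in the abstract above; the structure layout, chain length, and credit accounting are assumptions made only to illustrate the idea.

        #include <stdint.h>

        #define CHAIN_LEN 8   /* assumed number of bubble sorting module circuits */

        struct credit_module {
            int32_t credits;   /* resource (credit) available to the entity */
            uint8_t entity;    /* processing entity this module tracks      */
        };

        /* One bubble-sort pass: adjacent modules swap so the entity with the
         * most credit tends to migrate toward the head (index 0) of the chain. */
        static void bubble_pass(struct credit_module chain[CHAIN_LEN])
        {
            for (int i = CHAIN_LEN - 1; i > 0; i--) {
                if (chain[i].credits > chain[i - 1].credits) {
                    struct credit_module tmp = chain[i - 1];
                    chain[i - 1] = chain[i];
                    chain[i] = tmp;
                }
            }
        }

        /* Allocate to the entity at the head of the chain and charge its credit;
         * the credit is added back when the allocated task completes. */
        uint8_t allocate(struct credit_module chain[CHAIN_LEN], int32_t cost)
        {
            bubble_pass(chain);
            chain[0].credits -= cost;
            return chain[0].entity;
        }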

    Inverse PCP flow remapping for PFC pause frame generation
    103.
    Invention Grant
    Inverse PCP flow remapping for PFC pause frame generation (Granted)

    Publication No.: US09258256B2

    Publication Date: 2016-02-09

    Application No.: US14321762

    Application Date: 2014-07-01

    Inventor: Joseph M. Lamb

    CPC classification number: H04L49/3045 H04L49/9005 H04L49/9042 Y02P80/112

    Abstract: An overflow threshold value is stored for each of a plurality of virtual channels. A link manager maintains, for each virtual channel, a buffer count. If the buffer count for a virtual channel whose originating PCP flows were merged is detected to exceed that channel's overflow threshold value, then a PFC (Priority Flow Control) pause frame is generated in which multiple priority class enable bits are set to indicate that multiple PCP flows should be paused. For the particular virtual channel that is overloaded, an Inverse PCP Remap LUT (IPRLUT) circuit performs inverse PCP mapping, including merging and/or reordering mapping, and outputs an indication of each of those PCP flows that is associated with the overloaded virtual channel. Associated physical MAC port circuitry uses this information to generate the PFC pause frame so that the appropriate multiple enable bits are set in the pause frame.

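    A C sketch of the inverse PCP remap lookup described in the abstract above: given an overloaded virtual channel, set a PFC priority-enable bit for every PCP flow that was merged onto that channel. The forward map contents and channel count are assumptions, not the patented configuration.

        #include <stdint.h>

        #define NUM_PCP       8
        #define NUM_CHANNELS  4   /* assumed: fewer virtual channels than PCP values */

        /* Forward map (configured elsewhere): PCP value -> virtual channel. */
        static const uint8_t pcp_to_vc[NUM_PCP] = {0, 0, 1, 1, 2, 2, 3, 3};

        /* Inverse PCP remap: return an 8-bit priority-enable vector with a bit
         * set for each PCP flow assigned to the overloaded virtual channel. */
        uint8_t iprlut_enable_bits(uint8_t overloaded_vc)
        {
            uint8_t enables = 0;
            for (int pcp = 0; pcp < NUM_PCP; pcp++) {
                if (pcp_to_vc[pcp] == overloaded_vc)
                    enables |= (uint8_t)(1u << pcp);
            }
            return enables;   /* placed into the PFC pause frame's class-enable field */
        }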

    EFFICIENT SEARCH KEY PROCESSING METHOD
    104.
    Invention Application
    EFFICIENT SEARCH KEY PROCESSING METHOD (Granted)

    Publication No.: US20160012148A1

    Publication Date: 2016-01-14

    Application No.: US14326388

    Application Date: 2014-07-08

    Inventor: Rick Bouley

    CPC classification number: G06F13/28 G06F13/1663

    Abstract: An efficient search key processing method includes writing a first and a second search key data set to a memory, where the search key data sets are written to memory on a word by word basis. Each of the first and second search key data sets includes a header indicating a common lookup operation to be performed and a string of search keys. The header is immediately followed in memory by a search key. The search keys are located contiguously in the memory. At least one word contains search keys from the first and second search key data sets. The memory is read word by word. A first plurality of lookup command messages are sent based on the search keys included in the first search key data set. A second plurality of lookup command messages are sent based on the search keys included in the second search key data set.

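    A C sketch of the packed search-key layout described in the abstract above: each data set is a header naming the common lookup operation, immediately followed by its search keys, and because sets are packed back to back a single memory word can carry keys from both the first and the second data set. The 32-bit word size and 16-bit key and header widths are assumptions.

        #include <stdint.h>
        #include <stddef.h>

        #define WORD_BYTES 4   /* assumed memory word size */

        /* Append one search key data set (header + keys) to a byte image of
         * the memory; the caller later writes the image out word by word.
         * Returns the new byte offset so the next data set starts immediately
         * afterwards, possibly sharing a word with this one. */
        size_t append_key_set(uint8_t *image, size_t off,
                              uint16_t lookup_op, const uint16_t *keys, size_t nkeys)
        {
            image[off++] = (uint8_t)(lookup_op >> 8);    /* header, big-endian      */
            image[off++] = (uint8_t)lookup_op;
            for (size_t i = 0; i < nkeys; i++) {         /* keys follow contiguously */
                image[off++] = (uint8_t)(keys[i] >> 8);
                image[off++] = (uint8_t)keys[i];
            }
            return off;   /* may end mid-word: the next set shares that word */
        }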

    MERGING PCP FLOWS AS THEY ARE ASSIGNED TO A SINGLE VIRTUAL CHANNEL
    105.
    Invention Application
    MERGING PCP FLOWS AS THEY ARE ASSIGNED TO A SINGLE VIRTUAL CHANNEL (Granted)

    Publication No.: US20160006579A1

    Publication Date: 2016-01-07

    Application No.: US14321732

    Application Date: 2014-07-01

    Inventor: Joseph M. Lamb

    CPC classification number: H04L12/4625 H04L45/745 H04L49/25

    Abstract: A Network Flow Processor (NFP) integrated circuit receives, via each of a first plurality of physical MAC ports, one or more PCP (Priority Code Point) flows. The NFP also maintains, for each of a second plurality of virtual channels, a linked list of buffers. There is one port enqueue engine for each physical MAC port. For each PCP flow received via the physical MAC port associated with a port enqueue engine, the port enqueue engine causes frame data of the flow to be loaded into one particular linked list of buffers. Each port enqueue engine has a lookup table circuit that is configurable to cause multiple PCP flows to be merged so that the frame data for the multiple flows is all assigned to the same one virtual channel. Due to the PCP flow merging, the second number can be smaller than the first number multiplied by eight.

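    A C sketch of the per-port merge table described in the abstract above: each port enqueue engine maps the eight PCP values of its physical MAC port onto virtual channels, and the configurable table may map several PCP values onto the same channel, which is why the number of virtual channels can be smaller than eight times the number of ports. The port count, channel count, and example table contents are assumptions.

        #include <stdint.h>

        #define NUM_PORTS 4
        #define NUM_PCP   8
        #define NUM_VC    12   /* assumed: fewer than NUM_PORTS * 8 because flows merge */

        /* Configurable lookup table, one row per port enqueue engine; this
         * example merges PCP pairs or quadruples onto shared channels. */
        static const uint8_t merge_lut[NUM_PORTS][NUM_PCP] = {
            {0, 0, 0, 0, 1, 1, 1, 1},
            {2, 2, 2, 2, 3, 3, 3, 3},
            {4, 4, 5, 5, 6, 6, 7, 7},
            {8, 8, 9, 9, 10, 10, 11, 11},
        };

        /* Select the virtual channel whose linked list of buffers will receive
         * the frame data of this PCP flow. */
        uint8_t assign_virtual_channel(uint8_t port, uint8_t pcp)
        {
            return merge_lut[port][pcp];
        }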

    DDR retiming circuit
    106.
    Invention Grant
    DDR retiming circuit (Granted)

    Publication No.: US09208844B1

    Publication Date: 2015-12-08

    Application No.: US14448841

    Application Date: 2014-07-31

    CPC classification number: G11C11/4093 G11C7/1084 G11C7/1093 G11C7/222

    Abstract: An integrated circuit receives a DDR (Double Data Rate) data signal and an associated DDR clock signal, and communicates those signals from integrated circuit input terminals a substantial distance across the integrated circuit to a subcircuit that then receives and uses the DDR data. Within the integrated circuit, a DDR retiming circuit receives the DDR data signal and the associated DDR clock signal from the terminals. The DDR retiming circuit splits the DDR data signal into two components, and then transmits those two components over the substantial distance toward the subcircuit. The subcircuit then recombines the two components back into a single DDR data signal and supplies the DDR data signal and the DDR clock signal to the subcircuit. The DDR data signal and the DDR clock signal are supplied to the subcircuit in such a way that setup and hold time requirements of the subcircuit are met.

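    A purely behavioral C sketch of the split-and-recombine idea in the abstract above: a DDR stream carrying one bit per clock edge is separated into a rising-edge component and a falling-edge component for the long on-chip route, then interleaved back into a single DDR stream at the subcircuit. This software model ignores the setup and hold timing that the actual circuit guarantees.

        #include <stddef.h>
        #include <stdint.h>

        /* Split: even-indexed samples were captured on rising edges,
         * odd-indexed samples on falling edges. */
        void ddr_split(const uint8_t *ddr, size_t nbits,
                       uint8_t *rise, uint8_t *fall)
        {
            for (size_t i = 0; i < nbits / 2; i++) {
                rise[i] = ddr[2 * i];
                fall[i] = ddr[2 * i + 1];
            }
        }

        /* Recombine the two components into one DDR-ordered stream. */
        void ddr_merge(const uint8_t *rise, const uint8_t *fall,
                       size_t nbits, uint8_t *ddr)
        {
            for (size_t i = 0; i < nbits / 2; i++) {
                ddr[2 * i] = rise[i];
                ddr[2 * i + 1] = fall[i];
            }
        }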

    KICK-STARTED RUN-TO-COMPLETION PROCESSING METHOD THAT DOES NOT INVOLVE AN INSTRUCTION COUNTER
    107.
    Invention Application
    KICK-STARTED RUN-TO-COMPLETION PROCESSING METHOD THAT DOES NOT INVOLVE AN INSTRUCTION COUNTER (Pending - Published)

    Publication No.: US20150317162A1

    Publication Date: 2015-11-05

    Application No.: US14267329

    Application Date: 2014-05-01

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor includes no instruction counter and only fetches instructions either as a result of being prompted from the outside by an input data value and/or an initial fetch information value, or as a result of executing a fetch instruction. Initially the processor is not clocking. An incoming value kick-starts the processor to start clocking and to fetch a block of instructions from a section of code in a table. The input data value and/or the initial fetch information value determines the section and table from which the block is fetched. A LUT converts a table number in the initial fetch information value into a base address where the table is found. Fetch instructions at the ends of sections of code cause program execution to jump from section to section. A finished instruction causes an output data value to be output and stops clocking of the processor.

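    A C sketch of the counter-less fetch scheme in the abstract above: the initial fetch information selects a table through a small LUT of base addresses, and each section of code ends in a fetch instruction naming the next section, so no instruction counter is maintained. The table count, section length, and instruction width are assumptions.

        #include <stdint.h>

        #define NUM_TABLES   4
        #define SECTION_LEN  8   /* assumed number of instructions fetched per block */

        /* LUT: table number (from the initial fetch information) -> base address. */
        static const uint32_t table_base[NUM_TABLES] = {
            0x0000, 0x0400, 0x0800, 0x0C00
        };

        /* Fetch one block of instructions for a (table, section) pair; execution
         * then runs to the fetch or finished instruction at the end of the block. */
        const uint16_t *fetch_block(const uint16_t *code_mem,
                                    uint8_t table_num, uint8_t section_num)
        {
            uint32_t addr = table_base[table_num] + (uint32_t)section_num * SECTION_LEN;
            return &code_mem[addr];
        }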

    PICOENGINE MULTI-PROCESSOR WITH POWER CONTROL MANAGEMENT
    108.
    Invention Application
    PICOENGINE MULTI-PROCESSOR WITH POWER CONTROL MANAGEMENT (Granted)

    Publication No.: US20150293578A1

    Publication Date: 2015-10-15

    Application No.: US14251599

    Application Date: 2014-04-12

    Inventor: Gavin J. Stark

    Abstract: A general purpose PicoEngine Multi-Processor (PEMP) includes a hierarchically organized pool of small specialized picoengine processors and associated memories. A stream of data input values is received onto the PEMP. Each input data value is characterized, and from the characterization a task is determined. Picoengines are selected in a sequence. When the next picoengine in the sequence is available, it is then given the input data value along with an associated task assignment. The picoengine then performs the task. An output picoengine selector selects picoengines in the same sequence. If the next picoengine indicates that it has completed its assigned task, then the output value from the selected picoengine is output from the PEMP. By changing the sequence used, more or less of the processing power and memory resources of the pool is brought to bear on the incoming data stream. The PEMP automatically disables unused picoengines and memories.

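    A C sketch of the in-order assignment and collection scheme in the abstract above: work is handed to picoengines in a fixed sequence, and results are drained in that same sequence so output order matches input order. The pool size, the done flag, and the placeholder computation are assumptions of this model.

        #include <stdbool.h>
        #include <stdint.h>

        #define POOL_SIZE 16   /* assumed number of picoengines in the pool */

        struct picoengine {
            bool     done;     /* set by the engine when its assigned task is finished */
            uint32_t result;
        };

        static struct picoengine pool[POOL_SIZE];
        static unsigned in_idx, out_idx;   /* same selection sequence for input and output */

        /* Stand-in for a real picoengine: here it "processes" immediately. */
        static void run_task(struct picoengine *pe, uint32_t data, uint8_t task)
        {
            pe->result = data ^ task;   /* placeholder computation */
            pe->done   = true;
        }

        /* Hand the next input value and its task assignment to the next
         * picoengine in the sequence. */
        void assign_task(uint32_t data, uint8_t task)
        {
            struct picoengine *pe = &pool[in_idx++ % POOL_SIZE];
            pe->done = false;
            run_task(pe, data, task);
        }

        /* Output values leave in the same sequence: a result is emitted only
         * when the next engine in the sequence reports that it is done. */
        bool collect_result(uint32_t *out)
        {
            struct picoengine *pe = &pool[out_idx % POOL_SIZE];
            if (!pe->done)
                return false;
            *out = pe->result;
            out_idx++;
            return true;
        }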

    CHAINED-INSTRUCTION DISPATCHER
    109.
    Invention Application
    CHAINED-INSTRUCTION DISPATCHER (Pending - Published)

    Publication No.: US20150277924A1

    Publication Date: 2015-10-01

    Application No.: US14231028

    Application Date: 2014-03-31

    Abstract: A dispatcher circuit receives sets of instructions from an instructing entity. Instructions of the set that are of a first type are put into a first queue circuit, instructions of a second type are put into a second queue circuit, and so forth. The first queue circuit dispatches instructions of the first type to one or more processing engines and records when the instructions of the set are completed. When all the instructions of the set of the first type have been completed, then the first queue circuit sends the second queue circuit a go signal, which causes the second queue circuit to dispatch instructions of the second type and to record when they have been completed. This process proceeds from queue circuit to queue circuit. When all the instructions of the set have been completed, then the dispatcher circuit returns an “instructions done” to the original instructing entity.

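    A C sketch of the chained-queue handoff in the abstract above: each queue dispatches only its own instruction type, counts completions, and raises a go signal to the next queue once its portion of the set is done; after the last queue drains, "instructions done" is reported back. The queue count and completion-count fields are assumptions.

        #include <stdbool.h>
        #include <stdint.h>

        #define NUM_QUEUES 3   /* assumed number of instruction types / queue circuits */

        struct instr_queue {
            uint32_t pending;   /* loaded with the number of instructions of this type
                                   when the set arrives; counts down as they complete */
            bool     go;        /* set by the previous queue in the chain              */
        };

        static struct instr_queue chain[NUM_QUEUES];

        /* Called when an engine reports that one dispatched instruction of
         * queue q has completed.  Returns true when the whole set is done. */
        bool instruction_completed(unsigned q)
        {
            if (--chain[q].pending == 0) {
                if (q + 1 < NUM_QUEUES) {
                    chain[q + 1].go = true;   /* let the next type start dispatching */
                    return false;
                }
                return true;                  /* last queue drained: set complete */
            }
            return false;
        }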

    TRANSACTIONAL MEMORY THAT PERFORMS A PROGRAMMABLE ADDRESS TRANSLATION IF A DAT BIT IN A TRANSACTIONAL MEMORY WRITE COMMAND IS SET
    110.
    Invention Application
    TRANSACTIONAL MEMORY THAT PERFORMS A PROGRAMMABLE ADDRESS TRANSLATION IF A DAT BIT IN A TRANSACTIONAL MEMORY WRITE COMMAND IS SET (Granted)

    Publication No.: US20150220445A1

    Publication Date: 2015-08-06

    Application No.: US14172856

    Application Date: 2014-02-04

    Abstract: A transactional memory receives a command, where the command includes an address and a novel DAT (Do Address Translation) bit. If the DAT bit is set and if the transactional memory is enabled to do address translations and if the command is for an access (read or write) of a memory of the transactional memory, then the transactional memory performs an address translation operation on the address of the command. Parameters of the address translation are programmable and are set up before the command is received. In one configuration, certain bits of the incoming address are deleted, and other bits are shifted in bit position, and a base address is ORed in, and a padding bit is added, thereby generating the translated address. The resulting translated address is then used to access the memory of the transactional memory to carry out the command.

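    A C sketch of the programmable translation in the abstract above: selected bits of the incoming address are removed, the surviving bits are shifted in position, a base address is ORed in, and a padding bit is added. The field positions, widths, and the mask-then-shift simplification are assumptions, not the patented bit layout.

        #include <stdint.h>

        struct dat_config {
            uint32_t keep_mask;    /* address bits that survive the deletion    */
            unsigned shift;        /* left shift applied to the surviving bits  */
            uint32_t base_addr;    /* base address ORed into the result         */
            unsigned pad_bit;      /* position of the padding bit that is added */
        };

        /* Perform the translation only when the command's DAT bit is set;
         * otherwise the address is used as-is to access the memory. */
        uint32_t translate(uint32_t addr, int dat_bit, const struct dat_config *cfg)
        {
            if (!dat_bit)
                return addr;                              /* translation disabled */
            uint32_t kept    = addr & cfg->keep_mask;     /* delete unwanted bits */
            uint32_t shifted = kept << cfg->shift;        /* reposition the bits  */
            return shifted | cfg->base_addr | (1u << cfg->pad_bit);
        }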
