Multiprocessor system having efficient and shared atomic metering resource

    Publication number: US10366019B1

    Publication date: 2019-07-30

    Application number: US15256590

    Application date: 2016-09-04

    Inventor: Gavin J. Stark

    Abstract: A multiprocessor system includes several processors, a Shared Local Memory (SLMEM) that stores instructions and data, a system interface block, a posted transaction interface block, and an atomics block. Each processor is coupled to the system interface block via its AHB-S bus. The posted transaction interface block and the atomics block are shared resources that a processor can use via the same system interface block. A processor causes the atomics block to perform an atomic metering operation by doing an AHB-S write to a particular address in shared address space. The system interface block translates information from the AHB-S write into an atomics command, which in turn is converted into pipeline opcodes that cause a pipeline within the atomics block to perform the operation. An atomics response communicates result information, which is stored in the system interface block. The processor reads the result information by reading from the same address.
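
The write-then-read discipline described above can be sketched behaviorally. The token-bucket metering policy, the address value, and all names below are assumptions for illustration; the patent does not specify the metering algorithm itself, only that a write triggers the atomic operation and a read of the same address returns the result.

```python
class AtomicsBlock:
    """Behavioral sketch of the shared atomics block (assumed token-bucket meter)."""

    def __init__(self, bucket_size):
        self.tokens = bucket_size  # meter state inside the atomics block
        self.results = {}          # result storage in the system interface block

    def write(self, addr, packet_len):
        # The AHB-S write is translated into an atomics command; the pipeline
        # atomically tests and decrements the meter in a single operation.
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            self.results[addr] = "pass"
        else:
            self.results[addr] = "drop"

    def read(self, addr):
        # The processor collects the result by reading the same shared address.
        return self.results[addr]

meter = AtomicsBlock(bucket_size=100)
meter.write(0x40, 60)
first = meter.read(0x40)    # enough tokens: metered as "pass"
meter.write(0x40, 60)
second = meter.read(0x40)   # only 40 tokens remain: metered as "drop"
```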

    Addressless merge command with data item identifier

    Publication number: US10146468B2

    Publication date: 2018-12-04

    Application number: US14492013

    Application date: 2014-09-20

    Abstract: An addressless merge command includes an identifier of an item of data, and a reference value, but no address. A first part of the item is stored in a first place. A second part is stored in a second place. To move the first part so that the first and second parts are merged, the command is sent across a bus to a device. The device translates the identifier into a first address ADR1, and uses ADR1 to read the first part. Stored in or with the first part is a second address ADR2 indicating where the second part is stored. The device extracts ADR2, and uses ADR1 and ADR2 to issue bus commands. Each bus command causes a piece of the first part to be moved. When the entire first part has been moved, the device returns the reference value to indicate that the merge command has been completed.
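
The address-translation and piece-by-piece move steps can be sketched as follows. The flat memory model, the 4-byte piece size, and the table names are assumptions for illustration; only the command flow (identifier in, ADR1 via translation, ADR2 embedded with the first part, reference value returned on completion) comes from the abstract.

```python
MEMORY = {}     # hypothetical flat memory: address -> stored value
ID_TABLE = {}   # hypothetical translation table: item identifier -> ADR1

def merge_command(item_id, ref_value, piece_size=4):
    """Sketch of the addressless merge: the caller supplies only an item
    identifier and a reference value; the device recovers both addresses."""
    adr1 = ID_TABLE[item_id]             # translate the identifier into ADR1
    first_part, adr2 = MEMORY[adr1]      # ADR2 is stored with the first part
    # Issue one bus command per piece of the first part.
    for i in range(0, len(first_part), piece_size):
        MEMORY[adr2 + i] = first_part[i:i + piece_size]
    return ref_value                     # returned once the merge is complete

ID_TABLE[7] = 0x100
MEMORY[0x100] = (b"HEADERDATA", 0x200)   # first part plus embedded ADR2
done = merge_command(7, ref_value=0xAB)
```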

    Registered FIFO

    Publication number: US09940097B1

    Publication date: 2018-04-10

    Application number: US14527550

    Application date: 2014-10-29

    Abstract: A registered synchronous FIFO has a tail register, internal registers, and a head register. The FIFO cannot be pushed if it is full and cannot be popped if it is empty, but otherwise can be pushed and/or popped. Within the FIFO, the internal signal fanout of the incoming data circuitry and push control circuitry is minimized and remains essentially constant regardless of the number of registers of the FIFO. The output delay of the output data is also essentially constant regardless of the number of registers of the FIFO. An incoming data value can only be written into the head or tail. If a data value is in the tail, one of the internal registers is empty, and no push or pop is to be performed in a clock cycle, the data value in the tail is nevertheless moved into the empty internal register during that cycle.
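
A cycle-level sketch of the register discipline may help. This model is an illustration under assumptions: it keeps only the head/tail/internal structure, the "write only into head or tail" rule, and the idle-cycle drift from tail to an empty internal register; the actual fanout-limiting circuitry is not modeled.

```python
class RegisteredFIFO:
    """Behavioral sketch of the registered FIFO's per-cycle data movement."""

    def __init__(self, n_internal=2):
        self.tail = None
        self.internal = [None] * n_internal
        self.head = None

    def is_empty(self):
        return self.head is None and self.tail is None and \
            all(r is None for r in self.internal)

    def is_full(self):
        return self.head is not None and self.tail is not None and \
            all(r is not None for r in self.internal)

    def clock(self, push=None, pop=False):
        popped = None
        if pop and self.head is not None:
            popped = self.head           # a pop reads the head register
            self.head = None
        if push is not None and not self.is_full():
            if self.is_empty():
                self.head = push         # empty FIFO: write directly into head
            else:
                self.tail = push         # otherwise only the tail accepts data
        elif push is None and not pop and self.tail is not None:
            # Idle cycle: drift the tail value into an empty internal register.
            for i, r in enumerate(self.internal):
                if r is None:
                    self.internal[i] = self.tail
                    self.tail = None
                    break
        return popped

f = RegisteredFIFO(n_internal=2)
f.clock(push="A")        # FIFO empty: "A" lands in the head
f.clock(push="B")        # not empty: "B" lands in the tail
f.clock()                # idle cycle: "B" drifts into an internal register
out = f.clock(pop=True)  # pop returns "A"
```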

    Inter-packet interval prediction learning algorithm

    Publication number: US09900090B1

    Publication date: 2018-02-20

    Application number: US14690362

    Application date: 2015-04-17

    Abstract: An appliance receives packets that are part of a flow pair, each packet sharing an application protocol. The appliance determines the application protocol of the packets by performing deep packet inspection (DPI) on them. Packet sizes are measured and converted into packet size states. Packet size states, packet sequence numbers, and packet flow directions are used to create an application protocol estimation table (APET). The APET is used during normal operation to estimate the application protocol of a flow pair without performing time-consuming DPI. The appliance then determines inter-packet intervals between received packets. The inter-packet intervals are converted into inter-packet interval states. The inter-packet interval states and packet sequence numbers are used to create an inter-packet interval prediction table. The appliance stores an inter-packet interval prediction table for each application protocol; the table is used during operation to predict the inter-packet interval between packets.
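
The table-building step (quantize intervals into states, then record states per packet sequence number) can be sketched as below. The state boundaries and the "most frequent state wins" prediction rule are assumptions for illustration; the abstract does not specify the quantization or learning rule.

```python
from collections import Counter

def build_interval_table(flows, boundaries=(0.001, 0.01, 0.1)):
    """Build an inter-packet interval prediction table from observed flows."""
    def to_state(dt):
        # Quantize an inter-packet interval (seconds) into a small state number.
        for state, bound in enumerate(boundaries):
            if dt < bound:
                return state
        return len(boundaries)

    counts = {}  # packet sequence number -> Counter of interval states
    for flow in flows:  # each flow: list of observed inter-packet intervals
        for seq, dt in enumerate(flow):
            counts.setdefault(seq, Counter())[to_state(dt)] += 1
    # Prediction table: the most frequent interval state per sequence number.
    return {seq: c.most_common(1)[0][0] for seq, c in counts.items()}

# Three observed flows of the same application protocol.
flows = [[0.0005, 0.05], [0.0007, 0.08], [0.002, 0.06]]
table = build_interval_table(flows)
```

One such table would be kept per application protocol, selected via the APET's protocol estimate.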

    Hash range lookup command

    Publication number: US09866480B1

    Publication date: 2018-01-09

    Application number: US14927455

    Application date: 2015-10-29

    CPC classification number: H04L45/7453 G06F3/0613 G06F3/0659 G06F3/067

    Abstract: A novel hash range lookup command is disclosed. In an exemplary embodiment, a method includes (a) providing access to a hash table that includes hash buckets having hash entry fields; (b) receiving a novel hash lookup command; (c) using the hash lookup command to determine hash command parameters, a hashed index value, and a flow key value; (d) using the hash command parameters and the hashed index value to generate hash values (addresses) to access entry fields in a selectable number of hash buckets; (e) comparing bits of the entry value in the entry field to bits of the flow key value; (f) repeating (d) through (e) until a match is determined or until the selectable number of hash buckets and entries have been accessed; and (g) returning either an address of the entry field containing the match or a result associated with the entry field containing the match.
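
Steps (d) through (g) amount to probing a selectable range of buckets and comparing keys. A minimal sketch, assuming a simple list-of-buckets table and wrap-around addressing (both illustration choices, not taken from the patent):

```python
def hash_range_lookup(hash_table, hashed_index, flow_key, n_buckets):
    """Probe n_buckets consecutive buckets starting at the hashed index."""
    for i in range(n_buckets):
        bucket = hash_table[(hashed_index + i) % len(hash_table)]
        for key, result in bucket:   # each entry field holds a key and a result
            if key == flow_key:      # compare key bits against the flow key
                return result        # return the result associated with the match
    return None                      # no match within the selected range

table = [
    [(0x1234, "flow-A")],   # bucket 0
    [(0x9999, "flow-B")],   # bucket 1
]
hit = hash_range_lookup(table, hashed_index=0, flow_key=0x9999, n_buckets=2)
```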

    In-flight packet processing

    Publication number: US09804959B2

    Publication date: 2017-10-31

    Application number: US14530599

    Application date: 2014-10-31

    Abstract: A method for supporting in-flight packet processing is provided. Packet processing devices (microengines) can send a request for packet processing to a packet engine before a packet comes in. The request offers a twofold benefit. First, the microengines add themselves to a work queue to request processing. Once a packet becomes available, its header portion is automatically provided to the corresponding microengine for packet processing, so only one bus transaction is involved for a microengine to start packet processing. Second, the microengines can process packets before the entire packet is written into memory. This is especially useful for large packets, because a packet does not have to be written into memory completely before being processed by a microengine.
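
The pre-registration mechanism can be sketched as a simple work queue. The queue structure and microengine names are assumptions for illustration; the point shown is only that the request precedes the packet and the header is dispatched on arrival.

```python
from collections import deque

work_queue = deque()  # microengines that have pre-registered for work

def request_processing(microengine):
    # One bus transaction, issued before any packet has arrived.
    work_queue.append(microengine)

def packet_arrives(header):
    # As soon as the header portion is available it is handed to the
    # oldest waiting microengine, before the full payload reaches memory.
    if work_queue:
        return (work_queue.popleft(), header)
    return None  # no microengine waiting

request_processing("ME0")
dispatch = packet_arrives(b"hdr")   # ("ME0", b"hdr")
```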

    Making a flow ID for an exact-match flow table using a byte-wide multiplexer circuit

    Publication number: US09756152B1

    Publication date: 2017-09-05

    Application number: US14726423

    Application date: 2015-05-29

    CPC classification number: H04L69/22 H04L45/745 H04L47/2441

    Abstract: An exact-match flow table structure stores flow entries. Each flow entry includes a Flow Id and an action value. A flow entry is generated from an incoming packet. The flow table structure determines whether there is a stored flow entry whose Flow Id is an exact match for the generated Flow Id. In one novel aspect, a multiplexer circuit is used to generate Flow Ids. The multiplexer circuit includes a plurality of byte-wide multiplexers. Each respective one of the byte-wide multiplexers outputs a byte that is a corresponding respective byte of the Flow Id. The various inputs of the byte-wide multiplexers are coupled to receive various bytes of the incoming packet, various bytes of modified or compressed packet data, as well as bytes of metadata. By controlling the select values supplied onto the select inputs of the multiplexer circuit, Flow Ids of different forms can be generated.
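
The byte-wide multiplexer array can be modeled in software: one select value per output byte, each picking a byte from one of the available sources. The (source, offset) encoding of a select value is an assumption for illustration; the hardware encodes selects differently.

```python
def make_flow_id(packet, metadata, selects):
    """Model of the multiplexer circuit: byte i of the Flow Id is the
    output of byte-wide multiplexer i, driven by select value i."""
    sources = {"pkt": packet, "meta": metadata}
    return bytes(sources[src][off] for src, off in selects)

packet = bytes(range(64))           # stand-in for incoming packet bytes
metadata = b"\xaa\xbb"              # stand-in for per-packet metadata
# Select bytes 12-13 of the packet and byte 0 of the metadata.
flow_id = make_flow_id(packet, metadata,
                       [("pkt", 12), ("pkt", 13), ("meta", 0)])
```

Changing only the select list yields a Flow Id of a different form from the same packet, which is the aspect the abstract highlights.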

    Hardware first come first serve arbiter using multiple request buckets

    Publication number: US09727499B2

    Publication date: 2017-08-08

    Application number: US14074469

    Application date: 2013-11-07

    Inventor: Gavin J. Stark

    CPC classification number: G06F13/1663 G06F13/3625 G06F13/364

    Abstract: A First Come First Serve (FCFS) arbiter receives requests to utilize a shared resource from a plurality of devices and in response generates a grant value indicating whether a request is granted. The FCFS arbiter includes a circuit and a storage device. The circuit receives a first request and a grant enable during a first clock cycle and outputs a grant value. The grant enable is received from the shared resource. The grant value is communicated to the source of the first request. The storage device includes a plurality of request buckets. The first request is stored in a first request bucket when the first request is not granted during the first clock cycle, and is moved from the first request bucket to a second request bucket when the first request is not granted during a second clock cycle. A granted request is cleared from all request buckets.
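
The bucket-aging scheme can be sketched per clock cycle: ungranted requests migrate toward older buckets, so the oldest non-empty bucket identifies who came first. The bucket count and the tie-break within a bucket are assumptions for illustration.

```python
class FCFSArbiter:
    """Behavioral sketch of the multi-bucket FCFS arbiter."""

    def __init__(self, n_buckets=4):
        # buckets[0] holds the newest requests; higher indices are older.
        self.buckets = [set() for _ in range(n_buckets)]

    def clock(self, new_requests, grant_enable):
        granted = None
        if grant_enable:
            # Scan oldest bucket first: first-come requests win.
            for bucket in reversed(self.buckets):
                if bucket:
                    granted = min(bucket)  # tie-break within a bucket (assumed)
                    break
        # Age every ungranted request one bucket toward "older".
        for i in range(len(self.buckets) - 1, 0, -1):
            self.buckets[i] |= self.buckets[i - 1]
            self.buckets[i - 1] = set()
        self.buckets[0] = set(new_requests)
        if granted is not None:
            for bucket in self.buckets:
                bucket.discard(granted)  # granted request cleared from all buckets
        return granted

arb = FCFSArbiter()
g1 = arb.clock({"A"}, grant_enable=False)  # A not granted: stored in a bucket
g2 = arb.clock({"B"}, grant_enable=True)   # A is older than B: A granted first
g3 = arb.clock(set(), grant_enable=True)   # then B
```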

    Forwarding messages within a switch fabric of an SDN switch

    Publication number: US09699084B1

    Publication date: 2017-07-04

    Application number: US14634847

    Application date: 2015-03-01

    CPC classification number: H04L45/745 H04L45/54 H04L49/25 H04L49/35

    Abstract: A Software-Defined Networking (SDN) switch includes external network ports for receiving external network traffic onto the SDN switch, external network ports for transmitting external network traffic out of the SDN switch, a first Network Flow Switch (NFX) integrated circuit that has multiple network ports and maintains a flow table, a second NFX integrated circuit that has multiple network ports and maintains a flow table, and a Network Flow Processor (NFP) circuit that maintains a flow table. The NFP circuit couples directly to a network port of the first NFX integrated circuit but does not couple directly to any network port of the second NFX integrated circuit. The NFP circuit sends a flow entry to the first NFX integrated circuit along with an addressing label, and the first NFX integrated circuit uses the addressing label to determine that the flow entry is to be forwarded to the second NFX integrated circuit.
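
The label-based forwarding step can be sketched as below. The table layout, the `next_hop` wiring map, and the flow-entry format are assumptions for illustration; only the rule (install if the label names this NFX, otherwise forward toward the labeled NFX) comes from the abstract.

```python
def install_flow_entry(nfx_tables, next_hop, entry, label, at="NFX1"):
    """The NFP reaches only NFX1 directly; the addressing label tells each
    NFX whether to install the entry locally or forward it onward."""
    if label == at:
        nfx_tables[at].append(entry)    # label names this NFX: install here
    else:
        # Forward toward the NFX named by the label.
        install_flow_entry(nfx_tables, next_hop, entry, label, at=next_hop[at])

tables = {"NFX1": [], "NFX2": []}
next_hop = {"NFX1": "NFX2"}  # assumed port wiring between the two NFX chips
install_flow_entry(tables, next_hop, ("10.0.0.1", "drop"), label="NFX2")
```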

    Picoengine multi-processor with task assignment

    Publication number: US09489337B2

    Publication date: 2016-11-08

    Application number: US14251592

    Application date: 2014-04-12

    Inventor: Gavin J. Stark

    Abstract: A general purpose PicoEngine Multi-Processor (PEMP) includes a hierarchically organized pool of small specialized picoengine processors and associated memories. A stream of data input values is received onto the PEMP. Each input data value is characterized, and from the characterization a task is determined. Picoengines are selected in a sequence. When the next picoengine in the sequence is available, it is then given the input data value along with an associated task assignment. The picoengine then performs the task. An output picoengine selector selects picoengines in the same sequence. If the next picoengine indicates that it has completed its assigned task, then the output value from the selected picoengine is output from the PEMP. By changing the sequence used, more or less of the processing power and memory resources of the pool is brought to bear on the incoming data stream. The PEMP automatically disables unused picoengines and memories.
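
The dispatch/collect discipline (same selection sequence at input and output) can be sketched as below. The classification function, task functions, and picoengine names are assumptions for illustration; the sketch shows only that using one fixed sequence on both sides preserves input order in the output.

```python
def run_pemp(inputs, classify, sequence):
    """Sketch of the PEMP: assign inputs to picoengines in a fixed
    sequence, then collect outputs in the same sequence."""
    assignments = []
    # Input selector: each input value is characterized, a task is chosen,
    # and both are handed to the next picoengine in the sequence.
    for data, pe in zip(inputs, sequence):
        task = classify(data)
        assignments.append((pe, task, data))
    # Output selector: the same sequence, so results leave in input order.
    return [(pe, task(data)) for pe, task, data in assignments]

double = lambda x: x * 2       # hypothetical task for even inputs
increment = lambda x: x + 1    # hypothetical task for odd inputs
classify = lambda d: double if d % 2 == 0 else increment

results = run_pemp([4, 3, 6], classify, ["PE0", "PE1", "PE0"])
```

Shortening or lengthening the sequence brings fewer or more picoengines of the pool to bear on the stream, which is the throughput-scaling knob the abstract describes.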

