COMMAND-DRIVEN NFA HARDWARE ENGINE THAT ENCODES MULTIPLE AUTOMATONS
    111.
    Invention Application
    COMMAND-DRIVEN NFA HARDWARE ENGINE THAT ENCODES MULTIPLE AUTOMATONS (Granted)

    Publication No.: US20150193484A1

    Publication Date: 2015-07-09

    Application No.: US14151666

    Filing Date: 2014-01-09

    CPC classification number: H04L12/6418 G06F9/4498 G06F17/30283 G11C15/00

    Abstract: An automaton hardware engine employs a transition table organized into 2^n rows, where each row comprises a plurality of n-bit storage locations, and where each storage location can store at most one n-bit entry value. Each row corresponds to an automaton state. In one example, at least two NFAs are encoded into the table. The first NFA is indexed into the rows of the transition table in a first way, and the second NFA is indexed into the rows of the transition table in a second way. Due to this indexing, all rows are usable to store entry values that point to other rows.
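
The dual-indexing scheme described in the abstract above can be sketched in Python. The parameters (n = 4, four storage locations per row) and the two index functions are illustrative assumptions, not taken from the patent:

```python
N = 4                       # entry width in bits (assumed for illustration)
NUM_ROWS = 2 ** N           # the transition table has 2^n rows
SLOTS_PER_ROW = 4           # n-bit storage locations per row (assumed)

# Each storage location holds at most one n-bit entry value (a row index).
table = [[None] * SLOTS_PER_ROW for _ in range(NUM_ROWS)]

def index_nfa1(state):
    """First NFA: states map directly onto row numbers."""
    return state % NUM_ROWS

def index_nfa2(state):
    """Second NFA: states map onto rows from the opposite end, so the
    two automata interleave and every row remains usable."""
    return (NUM_ROWS - 1 - state) % NUM_ROWS

def add_transition(index_fn, from_state, to_state):
    """Store an entry value (a pointer to another row) in the first
    free storage location of the from-state's row."""
    row = table[index_fn(from_state)]
    row[row.index(None)] = index_fn(to_state)

add_transition(index_nfa1, 0, 1)   # NFA 1: state 0 -> state 1
add_transition(index_nfa2, 0, 1)   # NFA 2: state 0 -> state 1
```

Because the two automata index the same 2^n rows from different directions, neither reserves rows the other cannot use.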


    NFA BYTE DETECTOR
    112.
    Invention Application
    NFA BYTE DETECTOR (Granted)

    Publication No.: US20150193374A1

    Publication Date: 2015-07-09

    Application No.: US14151688

    Filing Date: 2014-01-09

    Abstract: An automaton hardware engine employs a transition table organized into 2^n rows, where each row comprises a plurality of n-bit storage locations, and where each storage location can store at most one n-bit entry value. Each row corresponds to an automaton state. In one example, at least two NFAs are encoded into the table. The first NFA is indexed into the rows of the transition table in a first way, and the second NFA is indexed into the rows of the transition table in a second way. Due to this indexing, all rows are usable to store entry values that point to other rows.


    HIERARCHICAL RESOURCE POOLS IN A LINKER
    113.
    Invention Application
    HIERARCHICAL RESOURCE POOLS IN A LINKER (Granted)

    Publication No.: US20150128118A1

    Publication Date: 2015-05-07

    Application No.: US14074623

    Filing Date: 2013-11-07

    CPC classification number: G06F8/54 G06F8/45 G06F8/453

    Abstract: A novel declare instruction can be used in source code to declare a sub-pool of resource instances to be taken from the resource instances of a larger declared pool. Using such declare instructions, a hierarchy of pools and sub-pools can be declared. A novel allocate instruction can then be used in the source code to instruct a novel linker to make resource instance allocations from a desired pool or a desired sub-pool of the hierarchy. After compilation, the declare and allocate instructions appear in the object code. The linker uses the declare and allocate instructions in the object code to set up the hierarchy of pools and to make the indicated allocations of resource instances to symbols. After resource allocation, the linker replaces instances of a symbol in the object code with the address of the allocated resource instance, thereby generating executable code.
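
A behavioral Python sketch of the pool hierarchy and symbol allocation described above; the toy object-code model, the names (Pool, declare_subpool, link), and the addresses are all hypothetical, not from the patent:

```python
class Pool:
    """One node in the declared hierarchy of pools and sub-pools."""
    def __init__(self, addresses):
        self.instances = list(addresses)   # free resource instances

def declare_subpool(parent, count):
    """Declare a sub-pool whose instances are taken from the larger pool."""
    return Pool([parent.instances.pop(0) for _ in range(count)])

def allocate(pool, symbol, allocations):
    """Bind a symbol to the next free resource instance of a pool."""
    allocations[symbol] = pool.instances.pop(0)

def link(object_code, allocations):
    """Replace each symbol occurrence with its allocated address,
    producing executable code."""
    return [allocations.get(word, word) for word in object_code]

big = Pool([0x100, 0x104, 0x108, 0x10C])   # declared pool
sub = declare_subpool(big, 2)              # sub-pool takes 0x100 and 0x104
allocations = {}
allocate(sub, "SYM_A", allocations)        # SYM_A -> 0x100
executable = link(["load", "SYM_A"], allocations)
```

Allocating from the sub-pool cannot collide with later allocations from the parent, because the sub-pool's instances were removed from the parent at declaration time.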


    HARDWARE FIRST COME FIRST SERVE ARBITER USING MULTIPLE REQUEST BUCKETS
    114.
    Invention Application
    HARDWARE FIRST COME FIRST SERVE ARBITER USING MULTIPLE REQUEST BUCKETS (Granted)

    Publication No.: US20150127864A1

    Publication Date: 2015-05-07

    Application No.: US14074469

    Filing Date: 2013-11-07

    Inventor: Gavin J. Stark

    CPC classification number: G06F13/1663 G06F13/3625 G06F13/364

    Abstract: A First Come First Serve (FCFS) arbiter receives requests to utilize a shared resource from a plurality of devices and in response generates a grant value indicating whether a request is granted. The FCFS arbiter includes a circuit and a storage device. The circuit receives a first request and a grant enable during a first clock cycle and outputs a grant value. The grant enable is received from a shared resource. The grant value is communicated to the source of the first request. The storage device includes a plurality of request buckets. The first request is stored in a first request bucket when the first request is not granted during the first clock cycle, and is moved from the first request bucket to a second request bucket when the first request is not granted during a second clock cycle. A granted request is cleared from all request buckets.
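
The bucket-aging behavior can be modeled in Python; the bucket count and the tie-break rule (lowest request number within the oldest bucket) are assumptions for illustration:

```python
class FCFSArbiter:
    """Behavioral model of the multi-bucket first-come-first-serve arbiter."""

    def __init__(self, num_buckets=3):
        self.buckets = [set() for _ in range(num_buckets)]  # [0] = newest

    def cycle(self, new_requests, grant_enable):
        """One clock cycle: accept requests, optionally grant the oldest,
        and age ungranted requests into the next bucket."""
        self.buckets[0] |= set(new_requests)
        granted = None
        if grant_enable:                    # grant enable from the resource
            for bucket in reversed(self.buckets):
                if bucket:                  # oldest non-empty bucket wins
                    granted = min(bucket)
                    break
        if granted is not None:
            for bucket in self.buckets:     # clear from all request buckets
                bucket.discard(granted)
        else:                               # not granted: requests age
            for i in reversed(range(len(self.buckets) - 1)):
                self.buckets[i + 1] |= self.buckets[i]
                self.buckets[i].clear()
        return granted

arb = FCFSArbiter()
arb.cycle({1, 2}, grant_enable=False)       # both requests wait and age
winner = arb.cycle({3}, grant_enable=True)  # oldest pending request wins
```

Requests that arrived earlier sit in older buckets, so the arbiter never starves an old request in favor of a newer one.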


    Network interface device that sets an ECN-CE bit in response to detecting congestion at an internal bus interface

    Publication No.: US10917348B1

    Publication Date: 2021-02-09

    Application No.: US16358351

    Filing Date: 2019-03-19

    Abstract: A network device includes a Network Interface Device (NID) and multiple servers. Each server is coupled to the NID via a corresponding PCIe bus. The NID has a network port through which it receives packets. The packets are destined for one of the servers. The NID detects a PCIe congestion condition regarding the PCIe bus to the server. Rather than transferring the packet across the bus, the NID buffers the packet and places a pointer to the packet in an overflow queue. If the level of bus congestion is high, the NID sets the packet's ECN-CE bit. When PCIe bus congestion subsides, the packet passes to the server. The server responds by returning an ACK whose ECE bit is set. The originating TCP endpoint in turn reduces the rate at which it sends data to the destination server, thereby reducing congestion at the PCIe bus interface within the network device.
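
A simplified Python model of the marking decision described above; the queue-depth threshold and the dict-based packet representation are illustrative assumptions (the real NID buffers packets and queues pointers in hardware):

```python
ECN_CE = 0b11   # Congestion Experienced code point (RFC 3168 ECN field)

class NIDPort:
    """Overflow queue for one server's PCIe bus, with ECN-CE marking."""

    def __init__(self, mark_threshold=4):
        self.overflow = []                  # pointers to buffered packets
        self.mark_threshold = mark_threshold

    def receive(self, packet, pcie_congested):
        if not pcie_congested:
            return packet                   # transferred straight across the bus
        if len(self.overflow) >= self.mark_threshold:
            packet["ecn"] = ECN_CE          # high congestion: mark the packet
        self.overflow.append(packet)        # buffer until the bus drains
        return None

    def drain(self):
        """PCIe congestion has subsided: pass queued packets to the server."""
        drained, self.overflow = self.overflow, []
        return drained

port = NIDPort(mark_threshold=1)
p1, p2 = {"ecn": 0}, {"ecn": 0}
port.receive(p1, pcie_congested=True)       # buffered, below threshold
port.receive(p2, pcie_congested=True)       # queue backed up: ECN-CE set
```

The marked packet eventually reaches the server, whose TCP stack echoes the congestion back via the ECE bit, throttling the sender.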

    Table fetch processor instruction using table number to base address translation

    Publication No.: US10853074B2

    Publication Date: 2020-12-01

    Application No.: US14267342

    Filing Date: 2014-05-01

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor includes no instruction counter and only fetches instructions either as a result of being prompted from the outside by an input data value and/or an initial fetch information value, or as a result of execution of a fetch instruction. Initially the processor is not clocking. An incoming value kick-starts the processor to start clocking and to fetch a block of instructions from a section of code in a table. The input data value and/or the initial fetch information value determines the section and table from which the block is fetched. A LUT converts a table number in the initial fetch information value into a base address where the table is found. Fetch instructions at the ends of sections of code cause program execution to jump from section to section. A finished instruction causes an output data value to be output and stops clocking of the processor.
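
The table-number-to-base-address translation and section-to-section fetching can be sketched in Python; the instruction encoding, table contents, and accumulator semantics are invented for illustration:

```python
TABLE_BASE_LUT = {0: 100, 1: 200}   # table number -> base address (the LUT)

# Hypothetical instruction memory: each table holds sections of code, and
# every section ends in either a fetch or a finished instruction.
MEMORY = {
    100: [("add", 1), ("fetch", 1, 0)],   # table 0, section 0
    200: [("add", 2), ("finished",)],     # table 1, section 0
}

def run(input_value, table_number, section=0):
    """Kick-started run-to-completion: no instruction counter, only
    blocks fetched by table number plus section offset."""
    acc = input_value                 # the incoming value starts the clock
    addr = TABLE_BASE_LUT[table_number] + section
    while True:
        for insn in MEMORY[addr]:     # execute the fetched block
            if insn[0] == "add":
                acc += insn[1]
            elif insn[0] == "fetch":  # jump to another table/section
                addr = TABLE_BASE_LUT[insn[1]] + insn[2]
                break
            elif insn[0] == "finished":
                return acc            # output the data value, stop clocking
```

Note there is no program counter: execution only moves because each block ends in a fetch (jump) or a finished (stop).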

    Executing A Selected Sequence Of Instructions Depending On Packet Type In An Exact-Match Flow Switch

    Publication No.: US20180343198A1

    Publication Date: 2018-11-29

    Application No.: US16042339

    Filing Date: 2018-07-23

    CPC classification number: H04L45/745 H04L69/22

    Abstract: An integrated circuit includes a processor and an exact-match flow table structure. A first packet is received onto the integrated circuit. The packet is determined to be of a first type. As a result of this determination, execution by the processor of a first sequence of instructions is initiated. This execution causes bits of the first packet to be concatenated and modified in a first way, thereby generating a first Flow Id. The first Flow Id is an exact match for the Flow Id of a first stored flow entry. A second packet is received and determined to be of a second type. As a result, a second sequence of instructions is executed. This causes bits of the second packet to be concatenated and modified in a second way, thereby generating a second Flow Id. The second Flow Id is an exact match for the Flow Id of a second stored flow entry.
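
A Python sketch of type-selected Flow Id generation; the two bit-concatenation recipes and the field widths are made-up examples, not the patent's actual instruction sequences:

```python
def flow_id_type_a(pkt):
    """First sequence: concatenate source, destination, and port bits."""
    return (pkt["src_ip"] << 48) | (pkt["dst_ip"] << 16) | pkt["port"]

def flow_id_type_b(pkt):
    """Second sequence: a different concatenation, with the low port
    bits masked off (a different 'modification')."""
    return (pkt["dst_ip"] << 48) | (pkt["src_ip"] << 16) | (pkt["port"] & 0xFF00)

SEQUENCES = {"A": flow_id_type_a, "B": flow_id_type_b}

def lookup(flow_table, pkt):
    """Packet type selects the instruction sequence; the resulting
    Flow Id must match a stored flow entry exactly (no wildcards)."""
    flow_id = SEQUENCES[pkt["type"]](pkt)
    return flow_table.get(flow_id)

pkt = {"type": "A", "src_ip": 0xC0A80001, "dst_ip": 0xC0A80002, "port": 80}
flow_table = {flow_id_type_a(pkt): "forward_to_port_3"}
```

Because the match is exact rather than ternary, the table can be a plain hash lookup instead of an expensive TCAM.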

    Using a neural network to determine how to direct a flow

    Publication No.: US10129135B1

    Publication Date: 2018-11-13

    Application No.: US14841719

    Filing Date: 2015-09-01

    Abstract: A flow of packets is communicated through a data center. The data center includes multiple racks, where each rack includes multiple network devices. A group of packets of the flow is received onto an integrated circuit located in a first network device. The integrated circuit includes a neural network. The neural network analyzes the group of packets and in response outputs a neural network output value. The neural network output value is used to determine how the packets of the flow are to be output from a second network device. In one example, each packet of the flow output by the first network device is output along with a tag. The tag is indicative of the neural network output value. The second device uses the tag to determine which output port located on the second device is to be used to output each of the packets.
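
A toy Python model of tagging a flow with a neural network output value; the single-neuron "network", the chosen features, and the tag-to-port map are all stand-ins for details the abstract does not specify:

```python
def neural_output(packet_group, weights):
    """Stand-in for the NID's neural network: one linear unit over
    simple per-group features, quantized to a small tag value."""
    sizes = [p["len"] for p in packet_group]
    features = [len(packet_group), sum(sizes) / len(sizes)]
    activation = sum(w * f for w, f in zip(weights, features))
    return 1 if activation > 0 else 0

def tag_flow(packet_group, weights):
    """First network device: attach the tag to every packet of the flow."""
    tag = neural_output(packet_group, weights)
    return [dict(p, tag=tag) for p in packet_group]

TAG_TO_PORT = {0: "port_1", 1: "port_2"}   # second device's policy

def route(packet):
    """Second network device: the tag alone selects the output port."""
    return TAG_TO_PORT[packet["tag"]]

group = [{"len": 1500}, {"len": 1500}]
tagged = tag_flow(group, weights=[1.0, -0.01])
```

The second device never runs the network itself; it only reads the tag, which keeps the per-packet work on the downstream rack switch trivial.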

    Chained-instruction dispatcher
    119.
    Invention Grant

    Publication No.: US10031758B2

    Publication Date: 2018-07-24

    Application No.: US14231028

    Filing Date: 2014-03-31

    Abstract: A dispatcher circuit receives sets of instructions from an instructing entity. Instructions of the set of a first type are put into a first queue circuit, instructions of the set of a second type are put into a second queue circuit, and so forth. The first queue circuit dispatches instructions of the first type to one or more processing engines and records when the instructions of the set are completed. When all the instructions of the set of the first type have been completed, then the first queue circuit sends the second queue circuit a go signal, which causes the second queue circuit to dispatch instructions of the second type and to record when they have been completed. This process proceeds from queue circuit to queue circuit. When all the instructions of the set have been completed, then the dispatcher circuit returns an “instructions done” to the original instructing entity.
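
The go-signal chaining between per-type queues can be modeled in Python; the instruction format and engine callbacks are illustrative, and instruction types are assumed to be ordered numerically:

```python
def dispatch_chained(instruction_set, engines):
    """Queue i may dispatch only after every instruction in queue i-1
    has completed (the 'go' signal between queue circuits)."""
    queues = {}
    for insn in instruction_set:            # sort the set into per-type queues
        queues.setdefault(insn["type"], []).append(insn)
    order = []                              # observed dispatch order
    for itype in sorted(queues):            # the go signal passes down the chain
        completed = 0
        for insn in queues[itype]:
            engines[insn["engine"]](insn)   # dispatch to a processing engine
            completed += 1                  # record the completion
        assert completed == len(queues[itype])  # all done -> send 'go'
        order.extend(insn["op"] for insn in queues[itype])
    return "instructions done", order       # reported to the instructing entity

engines = {"e0": lambda insn: None}         # engine that completes immediately
instructions = [
    {"type": 2, "op": "write", "engine": "e0"},
    {"type": 1, "op": "read", "engine": "e0"},
]
done, order = dispatch_chained(instructions, engines)
```

Even though the "write" instruction arrives first in the set, it dispatches after "read", because its queue must wait for the earlier queue's go signal.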

    Kick-started run-to-completion processing method that does not involve an instruction counter

    Publication No.: US10031755B2

    Publication Date: 2018-07-24

    Application No.: US14267329

    Filing Date: 2014-05-01

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor includes no instruction counter and only fetches instructions either as a result of being prompted from the outside by an input data value and/or an initial fetch information value, or as a result of execution of a fetch instruction. Initially the processor is not clocking. An incoming value kick-starts the processor to start clocking and to fetch a block of instructions from a section of code in a table. The input data value and/or the initial fetch information value determines the section and table from which the block is fetched. A LUT converts a table number in the initial fetch information value into a base address where the table is found. Fetch instructions at the ends of sections of code cause program execution to jump from section to section. A finished instruction causes an output data value to be output and stops clocking of the processor.
