Global Event Chain In An Island-Based Network Flow Processor
    121.
    Invention application
    Global Event Chain In An Island-Based Network Flow Processor (in force)

    Publication number: US20130219092A1

    Publication date: 2013-08-22

    Application number: US13399983

    Application date: 2012-02-17

    CPC classification number: G06F13/00 G06F13/4022

    Abstract: An island-based network flow processor (IB-NFP) integrated circuit includes islands organized in rows. A configurable mesh event bus extends through the islands and is configured to form one or more local event rings and a global event chain. The configurable mesh event bus is configured with configuration information received via a configurable mesh control bus. Each local event ring involves event ring circuits and event ring segments. In one example, an event packet being communicated along a local event ring reaches an event ring circuit. The event ring circuit examines the event packet and determines whether it meets a programmable criterion. If the event packet meets the criterion, then the event packet is inserted into the global event chain. The global event chain communicates the event packet to a global event manager that logs events and maintains statistics and other information.
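The filtering step described in the abstract can be sketched in a few lines: an event ring circuit forwards every event packet around the local ring, and additionally copies packets that match its programmable criterion into the global event chain. This is a minimal illustrative model; the class and field names (`EventRingCircuit`, `type`, `island`) are invented for the sketch and do not come from the patent.

```python
# Illustrative model of an event ring circuit tapping matching events
# into the global event chain. All names are hypothetical.

class EventRingCircuit:
    def __init__(self, match_type):
        self.match_type = match_type   # programmable criterion
        self.global_chain = []         # stands in for the global event chain

    def on_event(self, event):
        """Pass the event along the local ring; tap matching ones globally."""
        if event["type"] == self.match_type:
            self.global_chain.append(event)  # insert into global event chain
        return event                         # event continues around the ring

circ = EventRingCircuit(match_type="buffer_low")
circ.on_event({"type": "link_up", "island": 3})      # ignored by the tap
circ.on_event({"type": "buffer_low", "island": 5})   # copied to global chain
```

In the real device the global event chain would carry the matched packet to the global event manager for logging and statistics; here the list simply accumulates it.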


    Island-Based Network Flow Processor Integrated Circuit
    122.
    Invention application
    Island-Based Network Flow Processor Integrated Circuit (in force)

    Publication number: US20130219091A1

    Publication date: 2013-08-22

    Application number: US13399888

    Application date: 2012-02-17

    CPC classification number: H04L45/50 G06F15/7867 Y10T29/49124

    Abstract: A reconfigurable, scalable and flexible island-based network flow processor integrated circuit architecture includes a plurality of rectangular islands of identical shape and size. The islands are disposed in rows, and a configurable mesh command/push/pull data bus extends through all the islands. The integrated circuit includes first SerDes I/O blocks, an ingress MAC island that converts incoming symbols into packets, an ingress NBI island that analyzes packets and generates ingress packet descriptors, a microengine (ME) island that receives ingress packet descriptors and headers from the ingress NBI and analyzes the headers, a memory unit (MU) island that receives payloads from the ingress NBI and performs lookup operations and stores payloads, an egress NBI island that receives the header portions and the payload portions and egress descriptors and performs egress scheduling, and an egress MAC island that outputs packets to second SerDes I/O blocks.
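The split header/payload path through the islands (ingress MAC → ingress NBI → ME and MU → egress NBI → egress MAC) can be sketched as a toy function. The island roles mirror the abstract, but the split point, descriptor fields, and port computation are invented purely for illustration.

```python
# Toy walk of a packet through the island pipeline described in the
# abstract. Sizes and the port-selection rule are made up.

def process_packet(packet):
    # ingress NBI: split the packet and build an ingress packet descriptor
    header, payload = packet[:16], packet[16:]
    descriptor = {"header_len": len(header), "payload_len": len(payload)}
    # ME island: analyzes the header (here: trivially derives an output port)
    descriptor["out_port"] = header[0] % 4
    # MU island: stores the payload under a handle
    payload_store = {0: payload}
    # egress NBI: recombines header and stored payload for egress scheduling
    return header + payload_store[0], descriptor

pkt = bytes(range(32))
out, desc = process_packet(pkt)
```

The point of the split in the real architecture is that only the small header travels to the ME island for analysis while the bulk payload sits in the MU island until egress.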


    Flow Control Using a Local Event Ring In An Island-Based Network Flow Processor
    123.
    Invention application
    Flow Control Using a Local Event Ring In An Island-Based Network Flow Processor (in force)

    Publication number: US20130215901A1

    Publication date: 2013-08-22

    Application number: US13400008

    Application date: 2012-02-17

    CPC classification number: H04L49/9047 H04L47/13 H04L49/102 H04L49/9084

    Abstract: An island-based network flow processor (IB-NFP) integrated circuit includes islands organized in rows. A configurable mesh event bus extends through the islands and is configured to form a local event ring. The configurable mesh event bus is configured with configuration information received via a configurable mesh control bus. The local event ring involves event ring circuits and event ring segments. In one example, a packet is received onto a first island. If an amount of a processing resource (for example, memory buffer space) available to the first island is below a threshold, then an event packet is communicated from the first island to a second island via the local event ring. In response, the second island causes a third island to communicate via a command/push/pull data bus with the first island, thereby increasing the amount of the processing resource available to the first island for handling incoming packets.
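The three-island flow-control loop can be modeled in a few lines: the first island emits an event packet when its free resource drops below a threshold, and the second island reacts by directing a third island to grant the first island more of the resource. The threshold value, grant size, and island names are invented for this sketch.

```python
# Minimal model of the local-event-ring flow-control loop. The numbers
# and names are illustrative, not from the patent.

THRESHOLD = 4

def first_island_check(free_buffers, ring):
    """Emit an event packet onto the local event ring when running low."""
    if free_buffers < THRESHOLD:
        ring.append({"src": "island1", "event": "buffer_low"})

def second_island_react(ring, grants):
    """On a buffer_low event, have a third island top up the source island."""
    for ev in ring:
        if ev["event"] == "buffer_low":
            grants.append(("island3", ev["src"], 8))

ring, grants = [], []
first_island_check(free_buffers=2, ring=ring)  # below threshold -> event
second_island_react(ring, grants)
```

In the real device the grant would be carried out over the command/push/pull data bus; here it is just recorded as a tuple.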


    Distributed Credit FIFO Link of a Configurable Mesh Data Bus
    124.
    Invention application
    Distributed Credit FIFO Link of a Configurable Mesh Data Bus (in force)

    Publication number: US20130215899A1

    Publication date: 2013-08-22

    Application number: US13399846

    Application date: 2012-02-17

    CPC classification number: G06F13/4022 G06F13/00 H04L47/39 H04L49/901

    Abstract: An island-based integrated circuit includes a configurable mesh data bus. The data bus includes four meshes. Each mesh includes, for each island, a crossbar switch and radiating half links. The half links of adjacent islands align to form links between crossbar switches. A link is implemented as two distributed credit FIFOs. In one direction, a link portion involves a FIFO associated with an output port of a first island, a first chain of registers, and a second FIFO associated with an input port of a second island. When a transaction value passes through the FIFO and through the crossbar switch of the second island, an arbiter in the crossbar switch returns a taken signal. The taken signal passes back through a second chain of registers to a credit count circuit in the first island. The credit count circuit maintains a credit count value for the distributed credit FIFO.
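The credit accounting described above follows the classic credit-based flow-control pattern: the sender holds one credit per FIFO slot, spends a credit on each transaction value it pushes, and regains a credit when the returning "taken" signal reports that the far-side arbiter consumed a value. The sketch below models that loop; the register-chain latency is omitted, and the class and method names are invented.

```python
# Credit-based flow control for a distributed-credit FIFO link.
# Register-chain delay is not modeled; names are illustrative.

class CreditFifoLink:
    def __init__(self, depth):
        self.credits = depth   # one credit per FIFO slot
        self.fifo = []

    def try_send(self, value):
        """Push a transaction value only if a credit is available."""
        if self.credits == 0:
            return False       # no credit: sender must wait
        self.credits -= 1
        self.fifo.append(value)
        return True

    def taken(self):
        """Far-side arbiter consumed one value; taken signal returns a credit."""
        self.fifo.pop(0)
        self.credits += 1

link = CreditFifoLink(depth=2)
link.try_send("a")
link.try_send("b")
blocked = not link.try_send("c")   # out of credits, send refused
link.taken()                       # taken signal restores one credit
resumed = link.try_send("c")
```

The design choice this illustrates: because credits are decremented locally before the value is launched, the sender can never overrun the remote FIFO no matter how long the register chains are.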


    Processing Resource Management In An Island-Based Network Flow Processor
    125.
    Invention application
    Processing Resource Management In An Island-Based Network Flow Processor (in force)

    Publication number: US20130215893A1

    Publication date: 2013-08-22

    Application number: US13399958

    Application date: 2012-02-17

    CPC classification number: H04L12/6418

    Abstract: An island-based network flow processor (IB-NFP) integrated circuit has a high performance processor island. The processor island has a processor and a tightly coupled memory. The integrated circuit also has another memory. The other memory may be internal or external memory. The header of an incoming packet is stored in the tightly coupled memory of the processor island. The payload is stored in the other memory. In one example, if the amount of a processing resource is below a threshold then the header is moved from the first island to the other memory before the header and payload are communicated to an egress island for outputting from the integrated circuit. If, however, the amount of the processing resource is not below the threshold then the header is moved directly from the processor island to the egress island and is combined with the payload there for outputting from the integrated circuit.
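The path-selection rule in the abstract reduces to a single threshold comparison: below the threshold the header detours through the other memory before egress, otherwise it moves directly from the processor island to the egress island. A sketch, with an invented threshold value:

```python
# Threshold-based header routing, per the abstract. The threshold and
# island names are illustrative.

THRESHOLD = 10

def header_path(resource_amount):
    """Choose the header's route based on available processing resource."""
    if resource_amount < THRESHOLD:
        # resource scarce: stage the header in the other memory first
        return ["processor_island", "other_memory", "egress_island"]
    # resource plentiful: header goes straight to the egress island
    return ["processor_island", "egress_island"]
```

The detour frees up the tightly coupled memory on the processor island sooner, at the cost of an extra copy, which is why it is only taken when the resource is scarce.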


    Network interface device that sets an ECN-CE bit in response to detecting congestion at an internal bus interface

    Publication number: US10917348B1

    Publication date: 2021-02-09

    Application number: US16358351

    Application date: 2019-03-19

    Abstract: A network device includes a Network Interface Device (NID) and multiple servers. Each server is coupled to the NID via a corresponding PCIe bus. The NID has a network port through which it receives packets. The packets are destined for one of the servers. The NID detects a PCIe congestion condition regarding the PCIe bus to the server. Rather than transferring the packet across the bus, the NID buffers the packet and places a pointer to the packet in an overflow queue. If the level of bus congestion is high, the NID sets the packet's ECN-CE bit. When PCIe bus congestion subsides, the packet passes to the server. The server responds by returning an ACK whose ECE bit is set. The originating TCP endpoint in turn reduces the rate at which it sends data to the destination server, thereby reducing congestion at the PCIe bus interface within the network device.
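The marking decision can be sketched as a watermark check on the overflow queue: once the queue is deep enough to indicate high PCIe congestion, newly queued packets get their ECN-CE (Congestion Experienced) code point set before they are eventually released to the server. The watermark value and the packet representation are invented for this sketch.

```python
# ECN-CE marking driven by overflow-queue depth, modeling the NID
# behavior in the abstract. The watermark is illustrative.

HIGH_WATERMARK = 8

def enqueue(packet, overflow_queue):
    """Park the packet instead of crossing the congested PCIe bus."""
    if len(overflow_queue) >= HIGH_WATERMARK:
        packet["ecn_ce"] = True   # mark Congestion Experienced
    overflow_queue.append(packet)
    return packet

q = []
for i in range(10):
    enqueue({"seq": i, "ecn_ce": False}, q)
# Packets queued past the watermark carry the ECN-CE mark.
```

The marked packets are what ultimately close the loop: the receiving TCP stack echoes the congestion back (ECE), and the sender slows down, relieving the PCIe bottleneck without dropping packets.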

    Table fetch processor instruction using table number to base address translation

    Publication number: US10853074B2

    Publication date: 2020-12-01

    Application number: US14267342

    Application date: 2014-05-01

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor includes no instruction counter and only fetches instructions either: as a result of being prompted from the outside by an input data value and/or an initial fetch information value, or as a result of execution of a fetch instruction. Initially the processor is not clocking. An incoming value kick-starts the processor to start clocking and to fetch a block of instructions from a section of code in a table. The input data value and/or the initial fetch information value determines the section and table from which the block is fetched. A LUT converts a table number in the initial fetch information value into a base address where the table is found. Fetch instructions at the ends of sections of code cause program execution to jump from section to section. A finished instruction causes an output data value to be output and stops clocking of the processor.
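The LUT step in the abstract (table number in the initial fetch information → base address of the table) is a simple indexed translation, here sketched with invented addresses and an invented fixed section size:

```python
# Table-number-to-base-address translation via a LUT, as described in the
# abstract. Addresses, table count, and section size are made up.

TABLE_BASE_LUT = {0: 0x0000, 1: 0x0400, 2: 0x0800}
SECTION_SIZE = 0x40   # hypothetical fixed size of a section of code

def fetch_address(table_number, section_number):
    """Resolve where a block of instructions is fetched from."""
    base = TABLE_BASE_LUT[table_number]          # LUT: table number -> base
    return base + section_number * SECTION_SIZE  # offset of the section

addr = fetch_address(table_number=1, section_number=3)
```

Indirecting through the LUT means the tables can be relocated in memory without changing the small table numbers carried in the initial fetch information.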

    Virtio relay
    128.
    Invention grant

    Publication number: US10318334B2

    Publication date: 2019-06-11

    Application number: US15644636

    Application date: 2017-07-07

    Abstract: A VIRTIO Relay Program allows packets to be transferred from a Network Interface Device (NID), across a PCIe bus to a host, and to a virtual machine executing on the host. Rather than an OvS switch subsystem of the host making packet switching decisions, switching rules are transferred to the NID and the NID makes packet switching decisions. Transfer of a packet from the NID to the host occurs across an SR-IOV compliant PCIe virtual function and into host memory. Transfer from that memory and into memory space of the virtual machine is a VIRTIO transfer. This relaying of the packet occurs in no more than two read/write transfers without the host making any packet steering decision based on any packet header. Packet counts/statistics for the switched flow are maintained by the OvS switch subsystem just as if it were the subsystem that had performed the packet switching.
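The offload split in the abstract, where the NID holds the switching rules and picks the virtual function while the host-side relay only copies the packet into the VM, can be modeled as below. The flow-key format (`dst_ip`, `dst_port`) and the rule table are invented for illustration.

```python
# Model of NID-side switching plus host-side VIRTIO relaying, per the
# abstract. Rule format and names are hypothetical.

nid_rules = {("10.0.0.1", 80): "vf2"}   # flow -> SR-IOV virtual function

def nid_switch(packet):
    """Switching decision made on the NID, not by the host's OvS datapath."""
    return nid_rules.get((packet["dst_ip"], packet["dst_port"]))

def relay(packet, vm_memories):
    vf = nid_switch(packet)              # decision already made on the NID
    if vf is not None:
        vm_memories[vf].append(packet)   # VIRTIO copy into the VM's memory
    return vf

vms = {"vf2": []}
chosen = relay({"dst_ip": "10.0.0.1", "dst_port": 80, "data": b"x"}, vms)
```

Because the host never inspects a packet header, the transfer stays within the two read/write steps the abstract describes, while the OvS subsystem still sees per-flow counts as if it had switched the packets itself.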

    Using a neural network to determine how to direct a flow

    Publication number: US10129135B1

    Publication date: 2018-11-13

    Application number: US14841719

    Application date: 2015-09-01

    Abstract: A flow of packets is communicated through a data center. The data center includes multiple racks, where each rack includes multiple network devices. A group of packets of the flow is received onto an integrated circuit located in a first network device. The integrated circuit includes a neural network. The neural network analyzes the group of packets and in response outputs a neural network output value. The neural network output value is used to determine how the packets of the flow are to be output from a second network device. In one example, each packet of the flow output by the first network device is output along with a tag. The tag is indicative of the neural network output value. The second device uses the tag to determine which output port located on the second device is to be used to output each of the packets.
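The tag-based steering above splits naturally into two halves: the first device derives a tag from a neural network output value, and the second device maps the tag to an output port. The sketch below uses a drastically simplified stand-in for the network (a two-weight logistic unit); the weights, tag rule, and port table are all invented.

```python
# Tag derivation on the first device and tag-to-port lookup on the second,
# per the abstract. The "network" here is a toy two-weight logistic unit.

WEIGHTS = [0.5, -0.25]   # stand-in for a trained network's parameters

def nn_output(features):
    """Weighted sum through a logistic activation, in place of a real NN."""
    s = sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + 2.718281828459045 ** -s)

def tag_for(features):
    """First device: reduce the NN output value to a small tag."""
    return 1 if nn_output(features) > 0.5 else 0

PORT_TABLE = {0: "port_a", 1: "port_b"}   # second device's tag -> port map

def second_device_port(tag):
    return PORT_TABLE[tag]

port = second_device_port(tag_for([4.0, 2.0]))
```

The indirection through the tag is the key idea: the second device needs no neural network of its own, only a small lookup keyed by the tag carried with each packet of the flow.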

    Chained-instruction dispatcher
    130.
    Invention grant

    Publication number: US10031758B2

    Publication date: 2018-07-24

    Application number: US14231028

    Application date: 2014-03-31

    Abstract: A dispatcher circuit receives sets of instructions from an instructing entity. Instructions of the set of a first type are put into a first queue circuit, instructions of the set of a second type are put into a second queue circuit, and so forth. The first queue circuit dispatches instructions of the first type to one or more processing engines and records when the instructions of the set are completed. When all the instructions of the set of the first type have been completed, then the first queue circuit sends the second queue circuit a go signal, which causes the second queue circuit to dispatch instructions of the second type and to record when they have been completed. This process proceeds from queue circuit to queue circuit. When all the instructions of the set have been completed, then the dispatcher circuit returns an “instructions done” to the original instructing entity.
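The queue-to-queue chaining can be rendered as a toy function: instructions are binned by type into per-type queues, and each queue dispatches only after the previous queue's instructions have all completed (the "go" signal). Engine execution is simulated as immediate completion; the instruction format and type names are invented.

```python
# Toy model of the chained-instruction dispatcher: per-type queues that
# hand a "go" signal down the chain. Names and fields are illustrative.

def dispatch_chained(instruction_set, type_order):
    # dispatcher: route each instruction of the set to its type's queue
    queues = {t: [i for i in instruction_set if i["type"] == t]
              for t in type_order}
    completed = []
    for t in type_order:                 # "go" passes queue to queue in order
        for instr in queues[t]:
            completed.append(instr["op"])   # engine runs it to completion
    return completed                     # "instructions done" to the issuer

order = dispatch_chained(
    [{"type": "B", "op": "b1"}, {"type": "A", "op": "a1"},
     {"type": "A", "op": "a2"}, {"type": "B", "op": "b2"}],
    type_order=["A", "B"])
```

Note the ordering guarantee this buys: no type-B instruction of the set can start until every type-A instruction has finished, regardless of the order in which the instructing entity supplied them.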
