Configurable Mesh Data Bus In An Island-Based Network Flow Processor
    Invention Application · Status: Granted

    Publication Number: US20130219103A1

    Publication Date: 2013-08-22

    Application Number: US13399324

    Filing Date: 2012-02-17

    Applicant: Gavin J. Stark

    Inventor: Gavin J. Stark

    Abstract: An island-based network flow processor (IB-NFP) integrated circuit includes rectangular islands disposed in rows. A configurable mesh data bus includes a command mesh, a pull-id mesh, and two data meshes. The configurable mesh data bus extends through all the islands. For each mesh, each island includes a centrally located crossbar switch and eight half links. Two half links extend to ports on the top edge of the island, a half link extends to a port on the right edge of the island, two half links extend to ports on the bottom edge of the island, and a half link extends to a port on the left edge of the island. Two additional links extend to functional circuitry of the island. The configurable mesh data bus is configurable to form a command/push/pull data bus over which multiple transactions can occur simultaneously on different parts of the integrated circuit.

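    The per-island link arrangement in the abstract can be modeled with a minimal sketch. The names below (`ISLAND_HALF_LINKS`, `full_links_between`) are illustrative, not from the patent; the edge counts follow the abstract directly.

    ```python
    # Model of one island's data-bus half links, per the abstract: for each
    # mesh, an island has eight half links to its edges (two top, one right,
    # two bottom, one left), plus two links to its own functional circuitry.
    ISLAND_HALF_LINKS = {"top": 2, "right": 1, "bottom": 2, "left": 1}

    def full_links_between(edge_a, edge_b):
        """Half links on facing edges of adjacent islands pair into full links."""
        return min(ISLAND_HALF_LINKS[edge_a], ISLAND_HALF_LINKS[edge_b])

    # A top edge (2 half links) facing the bottom edge (2 half links) of the
    # island above yields 2 full links; left/right edges pair into 1.
    vertical = full_links_between("top", "bottom")
    horizontal = full_links_between("left", "right")
    total_half_links = sum(ISLAND_HALF_LINKS.values())
    ```

    Pairing half links only where islands abut is what makes the mesh configurable: unused half links at the die edge simply remain unpaired.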

    System and Method for Processing Secure Transmissions
    Invention Application · Status: Pending (published)

    Publication Number: US20120131330A1

    Publication Date: 2012-05-24

    Application Number: US13361559

    Filing Date: 2012-01-30

    CPC classification number: H04L63/0464 H04L63/1408

    Abstract: Secure transmissions between a client and a server are detected, a policy is formulated as to whether encrypted material needs to be decrypted, and if content is to be decrypted it is, using decryption information obtained from the client and server. The resulting plain text is then deployed to an entity such as a processor, store, or interface. The plain text can be checked or modified. The transmission between client and server can be blocked, delivered without being decrypted, or decrypted and then re-encrypted with or without modification. Each transmission is given an ID and a policy tag.

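    The per-transmission decision described in the abstract can be sketched as a policy-tag lookup. All names here (`Action`, `decide`, the tag values) are hypothetical illustrations, not terms from the patent.

    ```python
    # Sketch of the policy decision: each transmission carries an ID and a
    # policy tag; the tag selects one of the dispositions named in the
    # abstract (block, deliver undecrypted, or decrypt and re-encrypt).
    from enum import Enum, auto

    class Action(Enum):
        BLOCK = auto()              # drop the transmission
        PASS_THROUGH = auto()       # deliver without decrypting
        DECRYPT_REENCRYPT = auto()  # decrypt, optionally modify, re-encrypt

    def decide(transmission_id, policy_tag, policy_table):
        """Look up the formulated policy by tag; unknown tags pass through."""
        return policy_table.get(policy_tag, Action.PASS_THROUGH)

    policy = {"inspect": Action.DECRYPT_REENCRYPT, "deny": Action.BLOCK}
    action_inspect = decide(1, "inspect", policy)
    action_default = decide(2, "unknown", policy)
    ```

    Tagging each transmission once and dispatching on the tag keeps the per-packet fast path a simple table lookup, with the expensive decryption reserved for flows the policy flags.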

    ATOMIC COMPARE AND WRITE MEMORY
    Invention Application · Status: Granted

    Publication Number: US20110093663A1

    Publication Date: 2011-04-21

    Application Number: US12579649

    Filing Date: 2009-10-15

    CPC classification number: G06F9/30021 G06F9/30043 G06F9/526 G06F2209/521

    Abstract: A microcontroller system may include a microcontroller having a processor and a first memory, a memory bus, and a second memory in communication with the microcontroller via the memory bus. The first memory may include instructions for accessing a first data set from a contiguous memory block in the second memory. The first data set may include a first word having a first value and a plurality of first other words. The first memory may include instructions for receiving a write instruction including a second data set to be written to the contiguous memory block, the second data set including a second word having a second value. The first memory may include instructions for determining whether the first value equals the second value. If so, the first memory may include instructions for writing the second data set to the contiguous memory block and updating the first value.

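    The compare-and-write step can be sketched as follows. This is a minimal illustration, assuming the first word acts as a version value that is incremented on a successful write; the patent's abstract says only that the first value is "updated", so the increment is an assumption.

    ```python
    # Sketch of atomic compare-and-write on a contiguous block whose first
    # word holds a value compared against the incoming data set's first word.
    def compare_and_write(block, new_data):
        """block and new_data are lists: [value, word1, word2, ...].
        Write new_data into block only if the first values match; on
        success, update the first value (assumed here: increment)."""
        if block[0] != new_data[0]:
            return False              # compare failed; nothing is written
        block[:] = new_data           # write the second data set
        block[0] = new_data[0] + 1    # update the first value
        return True

    mem = [7, "a", "b"]
    ok = compare_and_write(mem, [7, "x", "y"])      # value matches: written
    failed = compare_and_write(mem, [7, "p", "q"])  # stale value: rejected
    ```

    Because a stale first value causes the whole write to be refused, two writers racing on the same block cannot silently overwrite each other's update.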

    High-speed and memory-efficient flow cache for network flow processors

    Publication Number: US10671530B1

    Publication Date: 2020-06-02

    Application Number: US16252406

    Filing Date: 2019-01-18

    Inventor: Edwin S. Peer

    Abstract: The flow cache of a network flow processor (NFP) stores flow lookup information in cache lines. Some cache lines are stored in external bulk memory and others are cached in cache memory on the NFP. A cache line includes several lock/hash entry slots. Each slot can store a CAM entry hash value, associated exclusive lock status, and associated shared lock status. The head of a linked list of keys associated with the first slot is implicitly pointed to. For the other lock/entry slots, the cache line stores a head pointer that explicitly points to the head. Due to this architecture, multiple threads can simultaneously process packets of the same flow, obtain lookup information, and update statistics in a fast and memory-efficient manner. Flow entries can be added and deleted while the flow cache is handling packets without the recording of erroneous statistics and timestamp information.
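    The cache-line layout in the abstract can be sketched with a small model. Field names, the slot count, and the addresses below are illustrative assumptions; only the slot contents (hash value, exclusive and shared lock status) and the implicit-versus-explicit head-pointer rule come from the abstract.

    ```python
    # Sketch of a flow-cache cache line: several lock/hash entry slots; the
    # key-list head for slot 0 is implicit (at a known location), while the
    # remaining slots store explicit head pointers in the cache line.
    from dataclasses import dataclass, field

    @dataclass
    class Slot:
        hash_value: int = 0
        exclusive_lock: bool = False
        shared_locks: int = 0   # number of threads holding a shared lock

    @dataclass
    class CacheLine:
        slots: list = field(default_factory=lambda: [Slot() for _ in range(4)])
        head_ptrs: list = field(default_factory=lambda: [None] * 3)

        def head_of(self, slot_index, implicit_base):
            # Slot 0's head is implied by the cache line's own location;
            # other slots carry an explicit pointer, costing line space.
            if slot_index == 0:
                return implicit_base
            return self.head_ptrs[slot_index - 1]

    line = CacheLine()
    line.head_ptrs[0] = 0x2000
    first = line.head_of(0, implicit_base=0x1000)   # implicit head
    second = line.head_of(1, implicit_base=0x1000)  # explicit head pointer
    ```

    Making the first head implicit saves one pointer per cache line in the common case where a line holds only one flow, which is part of what makes the cache memory-efficient.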

    Multiprocessor system having efficient and shared atomic metering resource

    Publication Number: US10366019B1

    Publication Date: 2019-07-30

    Application Number: US15256590

    Filing Date: 2016-09-04

    Inventor: Gavin J. Stark

    Abstract: A multiprocessor system includes several processors, a Shared Local Memory (SLMEM) that stores instructions and data, a system interface block, a posted transaction interface block, and an atomics block. Each processor is coupled to the system interface block via its AHB-S bus. The posted transaction interface block and the atomics block are shared resources that a processor can use via the same system interface block. A processor causes the atomics block to perform an atomic metering operation by doing an AHB-S write to a particular address in shared address space. The system interface block translates information from the AHB-S write into an atomics command, which in turn is converted into pipeline opcodes that cause a pipeline within the atomics block to perform the operation. An atomics response communicates result information which is stored into the system interface block. The processor reads the result information by reading from the same address.
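    The write-then-read protocol over the shared address can be sketched as below. The metering arithmetic (a simple token check) and all names are illustrative assumptions; the abstract specifies only that a write to a shared address is translated into an atomic metering command and that the result is read back from the same address.

    ```python
    # Sketch of the shared atomics resource: an AHB-S write to a shared
    # address is translated into an atomic metering operation, and the
    # result information is read back from the same address.
    class AtomicsBlock:
        def __init__(self, tokens):
            self.tokens = tokens
            self.result = {}   # result information, keyed by shared address

        def write(self, addr, packet_len):
            # Translation step: the write becomes a meter command executed
            # atomically; the outcome is stored for later readback.
            if self.tokens >= packet_len:
                self.tokens -= packet_len
                self.result[addr] = "pass"
            else:
                self.result[addr] = "drop"

        def read(self, addr):
            # The processor reads the result from the same shared address.
            return self.result[addr]

    atomics = AtomicsBlock(tokens=100)
    atomics.write(0x4000_0000, packet_len=60)   # meter a 60-byte packet
    outcome = atomics.read(0x4000_0000)
    ```

    Funneling every processor's metering through one atomic pipeline is what lets the resource be shared without per-processor locks: each write/read pair is self-contained.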

    High-speed and memory-efficient flow cache for network flow processors

    Publication Number: US10204046B1

    Publication Date: 2019-02-12

    Application Number: US15356562

    Filing Date: 2016-11-19

    Inventor: Edwin S. Peer

    Abstract: The flow cache of a network flow processor (NFP) stores flow lookup information in cache lines. Some cache lines are stored in external bulk memory and others are cached in cache memory on the NFP. A cache line includes several lock/hash entry slots. Each slot can store a CAM entry hash value, associated exclusive lock status, and associated shared lock status. The head of a linked list of keys associated with the first slot is implicitly pointed to. For the other lock/entry slots, the cache line stores a head pointer that explicitly points to the head. Due to this architecture, multiple threads can simultaneously process packets of the same flow, obtain lookup information, and update statistics in a fast and memory-efficient manner. Flow entries can be added and deleted while the flow cache is handling packets without the recording of erroneous statistics and timestamp information.

    Addressless merge command with data item identifier

    Publication Number: US10146468B2

    Publication Date: 2018-12-04

    Application Number: US14492013

    Filing Date: 2014-09-20

    Abstract: An addressless merge command includes an identifier of an item of data, and a reference value, but no address. A first part of the item is stored in a first place. A second part is stored in a second place. To move the first part so that the first and second parts are merged, the command is sent across a bus to a device. The device translates the identifier into a first address ADR1, and uses ADR1 to read the first part. Stored in or with the first part is a second address ADR2 indicating where the second part is stored. The device extracts ADR2, and uses ADR1 and ADR2 to issue bus commands. Each bus command causes a piece of the first part to be moved. When the entire first part has been moved, the device returns the reference value to indicate that the merge command has been completed.
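    The merge flow can be sketched end to end. The data structures and the choice to place the first part in front of the second are simplifying assumptions; the abstract fixes only the steps: translate the identifier to ADR1, find ADR2 stored with the first part, move the first part piece by piece, then return the reference value.

    ```python
    # Sketch of an addressless merge: the command carries only an item
    # identifier and a reference value; the device resolves all addresses.
    def merge(identifier, reference_value, id_table, memory):
        adr1 = id_table[identifier]       # translate identifier -> ADR1
        first_part, adr2 = memory[adr1]   # ADR2 is stored with the first part
        # One bus command per piece: move each piece of the first part to
        # the second part's location, merging the two.
        for i, piece in enumerate(first_part):
            memory[adr2].insert(i, piece)
        del memory[adr1]
        return reference_value            # signals the merge is complete

    # First part (with embedded ADR2) at 0x100; second part at 0x200.
    memory = {0x100: (["h1", "h2"], 0x200), 0x200: ["payload"]}
    done = merge(identifier=42, reference_value=0xAB,
                 id_table={42: 0x100}, memory=memory)
    ```

    Because the requester never supplies an address, it needs no knowledge of where either part lives; the returned reference value is its only handle on the transaction.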

    Low cost multi-server array architecture

    Publication Number: US10034070B1

    Publication Date: 2018-07-24

    Application Number: US14846797

    Filing Date: 2015-09-06

    Inventor: J. Niel Viljoen

    Abstract: An array of columns and rows of host server devices is mounted in a row of racks. Each device has a host processor and an exact-match packet switching integrated circuit. Packets are switched within the system using exact-match flow tables that are provisioned by a central controller. Each device is coupled by a first cable to a device to its left, by a second cable to a device to its right, by a third cable to a device above, and by a fourth cable to a device below. In one example, substantially all cables that are one meter or less in length are non-optical cables, whereas substantially all cables that are seven meters or more in length are optical cables. Advantageously, each device of a majority of the devices has four and only four cable ports, and connects only to non-optical cables, and the connections involve no optical transceiver.
