31. COMPARTMENTALIZATION OF THE USER NETWORK INTERFACE TO A DEVICE
    Invention application (granted)

    Publication No.: US20140201734A1

    Publication Date: 2014-07-17

    Application No.: US13742311

    Application Date: 2013-01-15

    Abstract: A device has a physical network interface port through which a user can monitor and configure the device. A backend process and a virtual machine (VM) execute on a host operating system (OS). A front end user interface process executes on the VM, and is therefore compartmentalized in the VM. There is no front end user interface executing on the host OS outside the VM. The only management access channel into the device is via a first communication path through the physical network interface port, to the VM, up the VM's stack, and to the front end process. If the backend process is to be instructed to take an action, then the front end process forwards an application layer instruction to the backend process via a second communication path. The instruction passes down the VM stack, across a virtual secure network link, up the host stack, and to the backend process.

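    Below is a minimal Python sketch of the two management paths described in the abstract: a user request enters through the front end compartmentalized in the VM, and only an application layer instruction reaches the host-side backend over a separate internal link (modeled here as a queue). All class and field names are illustrative, not taken from the patent.

```python
# Minimal sketch (not the patent's implementation): the management front end is the
# only process reachable through the physical port; it relays application-layer
# instructions to the host-side backend over a separate internal link.
from queue import Queue

class Backend:
    """Host-side backend process that actually reconfigures the device."""
    def __init__(self):
        self.config = {"mtu": 1500}

    def handle_instruction(self, instruction):
        # Only application-layer instructions arriving over the internal link are
        # accepted; there is no direct management path to this process.
        if instruction["op"] == "set":
            self.config[instruction["key"]] = instruction["value"]
        return dict(self.config)

class FrontEnd:
    """User-facing process, compartmentalized in the VM."""
    def __init__(self, secure_link):
        self.secure_link = secure_link  # stands in for the virtual secure network link

    def handle_user_request(self, text):
        # Path 1: request arrives via the physical port and the VM's stack.
        key, value = text.split("=")
        # Path 2: forward an application-layer instruction to the backend.
        self.secure_link.put({"op": "set", "key": key, "value": int(value)})

link = Queue()
backend = Backend()
front_end = FrontEnd(link)
front_end.handle_user_request("mtu=9000")
print(backend.handle_instruction(link.get()))   # {'mtu': 9000}
```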

    32. Staggered Island Structure In An Island-Based Network Flow Processor
    Invention application (granted)

    Publication No.: US20130219100A1

    Publication Date: 2013-08-22

    Application No.: US13399433

    Application Date: 2012-02-17

    Applicant: Gavin J. Stark

    Inventor: Gavin J. Stark

    Abstract: An island-based network flow processor (IB-NFP) integrated circuit includes rectangular islands disposed in rows. In one example, a configurable mesh data bus is configurable to form a command/push/pull data bus over which multiple transactions can occur simultaneously on different parts of the integrated circuit. The rectangular islands of one row are oriented in staggered relation with respect to the rectangular islands of the next row. The left and right edges of islands in a row align with the left and right edges of islands two rows down in the row structure. The data bus involves multiple meshes. In each mesh, each island has a centrally located crossbar switch, six radiating half links, and half links down to the functional circuitry of the island. The staggered orientation of the islands, and the structure of the half links, allows half links of adjacent islands to align with one another.

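    A small geometry sketch (Python) of the staggering described above, using made-up island dimensions: odd rows are offset by half an island width, so edges align two rows apart and half links of vertically adjacent islands can meet.

```python
# Illustrative geometry only: place rectangular islands of width W in rows,
# offsetting every other row by W/2. Dimensions are arbitrary.
W, H = 8, 4          # island width and height (arbitrary units)

def island_origin(row, col):
    x = col * W + (W // 2 if row % 2 else 0)   # stagger odd rows by half a width
    y = row * H
    return x, y

# Left/right edges of islands in row r align with edges of islands in row r + 2.
x0, _ = island_origin(0, 3)
x2, _ = island_origin(2, 3)
assert x0 == x2

# With a half-width stagger, the center of an island in row r sits over the shared
# edge of two adjacent islands in the row above, which is what lets radiating
# half links of vertically neighboring islands meet and join into full links.
cx = island_origin(1, 2)[0] + W / 2
print(cx, island_origin(0, 2)[0] + W, island_origin(0, 3)[0])  # 24.0 24 24
```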

    33. System and Method for Processing Secure Transmissions
    Invention application (pending, published)

    Publication No.: US20090201978A1

    Publication Date: 2009-08-13

    Application No.: US12064560

    Application Date: 2006-08-23

    CPC classification number: H04L63/0464 H04L63/1408

    Abstract: A method and apparatus for improving channel estimation within an OFDM communication system. Channel estimation in OFDM is usually performed with the aid of pilot symbols. The pilot symbols are typically spaced in time and frequency. The set of frequencies and times at which pilot symbols are inserted is referred to as a pilot pattern. In some cases, the pilot pattern is a diagonal-shaped lattice, either regular or irregular. The method first interpolates in the direction of larger coherence (time or frequency). Using these measurements, the method increases the effective density of pilot symbols in the direction of faster change, thereby improving channel estimation without increasing overhead. The results of the first interpolating step can then be used to assist the interpolation in the dimension of smaller coherence (time or frequency).

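    A rough numerical sketch (Python/NumPy) of the two-stage interpolation described above, assuming time is the dimension of larger coherence; the grid sizes, pilot spacing, and toy channel model are invented for illustration.

```python
# Two-stage channel interpolation from a sparse pilot grid (illustrative only).
import numpy as np

n_sym, n_sub = 16, 64                      # OFDM symbols (time) x subcarriers (frequency)
t = np.arange(n_sym)[:, None]
f = np.arange(n_sub)[None, :]
H = np.exp(1j * (0.05 * t + 0.3 * f))      # toy true channel: smooth in time, faster in frequency

pilot_syms = np.arange(0, n_sym, 4)        # pilots on every 4th symbol
pilot_subs = np.arange(0, n_sub, 8)        # and every 8th subcarrier

# Stage 1: interpolate along time (the larger-coherence dimension) on each pilot subcarrier.
dense_t = np.zeros((n_sym, len(pilot_subs)), dtype=complex)
for j, k in enumerate(pilot_subs):
    obs = H[pilot_syms, k]                 # pilot observations (noise omitted)
    dense_t[:, j] = (np.interp(np.arange(n_sym), pilot_syms, obs.real)
                     + 1j * np.interp(np.arange(n_sym), pilot_syms, obs.imag))

# Stage 2: use the time-dense estimates to interpolate across frequency.
H_est = np.zeros((n_sym, n_sub), dtype=complex)
for i in range(n_sym):
    H_est[i] = (np.interp(np.arange(n_sub), pilot_subs, dense_t[i].real)
                + 1j * np.interp(np.arange(n_sub), pilot_subs, dense_t[i].imag))

print("mean abs error:", np.mean(np.abs(H_est - H)))
```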

    Communicating a neural network feature vector (NNFV) to a host and receiving back a set of weight values for a neural network

    Publication No.: US12223418B1

    Publication Date: 2025-02-11

    Application No.: US14841722

    Application Date: 2015-09-01

    Abstract: A flow of packets is communicated through a data center. The data center includes multiple racks, where each rack includes multiple network devices. A group of packets of the flow is received onto a first network device. The first device includes a neural network. The first network device generates a neural network feature vector (NNFV) based on the received packets. The first network device then sends the NNFV to a second network device. The second device uses the NNFV to determine a set of weight values. The weight values are then sent back to the first network device. The first device loads the weight values into the neural network. The neural network, as configured by the weight values, then analyzes each of a plurality of flows received onto the first device to determine whether the flow likely has a particular characteristic.
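
    A simplified Python sketch of the exchange described above; the feature extraction, the host-side weight computation, and the one-layer classifier are placeholders rather than anything specified by the patent.

```python
# Device builds a feature vector, "sends" it to a host, and loads the returned weights.
import numpy as np

def packets_to_nnfv(pkt_sizes):
    # Hypothetical feature vector: mean size, max size, packet count.
    return np.array([np.mean(pkt_sizes), np.max(pkt_sizes), len(pkt_sizes)], dtype=float)

def host_compute_weights(nnfv):
    # Stand-in for the second device: returns a weight vector derived from the NNFV.
    return nnfv / (np.linalg.norm(nnfv) + 1e-9)

class TinyClassifier:
    def __init__(self):
        self.w = None
    def load_weights(self, w):
        self.w = w
    def flow_has_characteristic(self, flow_nnfv, threshold=1.0):
        return float(self.w @ flow_nnfv) > threshold

# First device: build the NNFV from a group of packets of the flow.
nnfv = packets_to_nnfv([60, 1500, 1500, 40])
weights = host_compute_weights(nnfv)     # would travel over the network in practice

clf = TinyClassifier()
clf.load_weights(weights)
print(clf.flow_has_characteristic(packets_to_nnfv([64, 64, 64])))
```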

    Configuration mesh data bus and transactional memories in a multi-processor integrated circuit

    Publication No.: US10911038B1

    Publication Date: 2021-02-02

    Application No.: US16247566

    Application Date: 2019-01-15

    Abstract: A network flow processor integrated circuit includes a plurality of processors, a plurality of multi-threaded transactional memories (MTMs), and a configurable mesh posted transaction data bus. The configurable mesh posted transaction data bus includes a configurable command mesh and a configurable data mesh. Each of these configurable meshes includes crossbar switches and interconnecting links. A command bus transaction value issued by a processor can pass across the command mesh to an MTM. The command bus transaction value includes a reference value. The MTM uses the reference value to pull data across the configurable data mesh into the MTM. The MTM then uses the data to carry out the commanded transactional memory operation. Multiple such commands can pass across the posted transaction bus across different parts of the integrated circuit at the same time, and a single MTM can be carrying out multiple such operations at the same time.
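
    A toy Python model of the posted command/pull flow described above (not the real CPP bus protocol): the processor posts a command carrying a reference value, and the transactional memory uses that reference to pull the operand data before performing the operation.

```python
# Posted command with a reference value; the memory pulls the data it needs.
class Processor:
    def __init__(self):
        self.pull_buffers = {}            # reference value -> data waiting to be pulled
        self.next_ref = 0

    def post_command(self, op, addr, data):
        ref = self.next_ref
        self.next_ref += 1
        self.pull_buffers[ref] = data     # data stays here until the MTM pulls it
        return {"op": op, "addr": addr, "ref": ref}   # travels over the command mesh

    def serve_pull(self, ref):
        return self.pull_buffers.pop(ref) # travels over the data mesh

class TransactionalMemory:
    def __init__(self):
        self.mem = {}

    def handle(self, cmd, source):
        data = source.serve_pull(cmd["ref"])          # pull using the reference value
        if cmd["op"] == "atomic_add":
            self.mem[cmd["addr"]] = self.mem.get(cmd["addr"], 0) + data

cpu, mtm = Processor(), TransactionalMemory()
mtm.handle(cpu.post_command("atomic_add", addr=0x100, data=5), cpu)
mtm.handle(cpu.post_command("atomic_add", addr=0x100, data=7), cpu)
print(hex(0x100), mtm.mem[0x100])   # 0x100 12
```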

    Loading a flow tracking autolearning match table

    Publication No.: US10476747B1

    Publication Date: 2019-11-12

    Application No.: US14923460

    Application Date: 2015-10-27

    Abstract: A networking device includes: 1) a first processor that includes a match table, and 2) a second processor that includes both a Flow Tracking Autolearning Match Table (FTAMT) and a synchronized match table. A set of multiple entries stored in the synchronized match table is synchronized with a corresponding set of multiple entries in the match table on the first processor. The FTAMT, for the first packet of a flow, generates a Flow Identifier (ID) and stores the flow ID as part of a new entry for the flow. The match of a packet to one of the synchronized entries in the synchronized match table causes an action identifier to be recorded in the new entry in the FTAMT. A subsequent packet of the flow results in a hit in the FTAMT and results in the previously recorded action being applied to the subsequent packet.
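
    A behavioral Python sketch of the autolearning flow described above, with invented keys and action names: an entry is created on a flow's first packet, the action found in the synchronized match table is recorded in that entry, and later packets of the flow hit the entry directly.

```python
# Autolearning match table that records an action on the first packet of a flow.
import itertools

SYNCED_MATCH_TABLE = {("10.0.0.1", "10.0.0.2", 6): "forward_port_3"}   # mirrors the first processor

class FTAMT:
    def __init__(self):
        self.entries = {}                 # flow key -> (flow_id, action)
        self._ids = itertools.count(1)

    def process(self, pkt):
        key = (pkt["src"], pkt["dst"], pkt["proto"])
        if key in self.entries:           # hit: apply the previously recorded action
            return self.entries[key]
        flow_id = next(self._ids)         # first packet of the flow: learn it
        action = SYNCED_MATCH_TABLE.get(key, "send_to_host")
        self.entries[key] = (flow_id, action)
        return flow_id, action

table = FTAMT()
pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 6}
print(table.process(pkt))   # (1, 'forward_port_3')  - entry created and action recorded
print(table.process(pkt))   # (1, 'forward_port_3')  - hit on the learned entry
```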

    Efficient forwarding of encrypted TCP retransmissions

    Publication No.: US10419406B2

    Publication Date: 2019-09-17

    Application No.: US15860652

    Application Date: 2018-01-02

    Abstract: A network device receives TCP segments of a flow via a first SSL session and transmits TCP segments via a second SSL session. Once a TCP segment has been transmitted, the TCP payload need no longer be stored on the network device. Substantial memory resources are conserved, because the device may have to handle many retransmit TCP segments at a given time. If the device receives a retransmit segment, then the device regenerates the retransmit segment to be transmitted. A data structure of entries is stored, with each entry including a decrypt state and an encrypt state for an associated SSL byte position. The device uses the decrypt state to initialize a decrypt engine, decrypts an SSL payload of the retransmit TCP segment received, uses the encrypt state to initialize an encrypt engine, re-encrypts the SSL payload, and then incorporates the re-encrypted SSL payload into the regenerated retransmit TCP segment.
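
    A toy Python illustration of the state-checkpointing idea described above, not real TLS/SSL: the cipher is replaced by a positional XOR keystream, so the only per-entry state is the byte offset at which each segment started on the inbound and outbound sessions.

```python
# Regenerating a retransmitted segment from saved per-position cipher state.
def keystream(key, offset, length):
    # Deterministic toy keystream; a real device would checkpoint cipher state instead.
    return bytes(((key + offset + i) * 131) % 256 for i in range(length))

def xor(data, ks):
    return bytes(a ^ b for a, b in zip(data, ks))

IN_KEY, OUT_KEY = 7, 42
entries = {}                       # TCP sequence number -> (inbound offset, outbound offset)

def forward_segment(seq, ciphertext, in_off, out_off):
    entries[seq] = (in_off, out_off)                      # remember state, drop the payload
    plain = xor(ciphertext, keystream(IN_KEY, in_off, len(ciphertext)))
    return xor(plain, keystream(OUT_KEY, out_off, len(plain)))

def regenerate_retransmit(seq, ciphertext):
    in_off, out_off = entries[seq]                        # re-initialize from saved state
    plain = xor(ciphertext, keystream(IN_KEY, in_off, len(ciphertext)))
    return xor(plain, keystream(OUT_KEY, out_off, len(plain)))

segment = xor(b"GET / HTTP/1.1", keystream(IN_KEY, 100, 14))   # as received on session 1
first = forward_segment(seq=1000, ciphertext=segment, in_off=100, out_off=250)
again = regenerate_retransmit(seq=1000, ciphertext=segment)    # retransmit arrives later
print(first == again)   # True: the regenerated segment matches the originally sent one
```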

    Low-level programming language plugin to augment high-level programming language setup of an SDN switch

    Publication No.: US10419242B1

    Publication Date: 2019-09-17

    Application No.: US15894866

    Application Date: 2018-02-12

    Abstract: A method involves compiling a first amount of high-level programming language code (for example, P4) and a second amount of a low-level programming language code (for example, C) thereby obtaining a first amount of native code and a second amount of native code. The high-level programming language code at least in part defines how an SDN switch performs matching in a first condition. The low-level programming language code at least in part defines how the SDN switch performs matching in a second condition. The low-level code can be a type of plugin or patch for handling special packets. The amounts of native code are loaded into the SDN switch such that a first processor (for example, x86 of the host) executes the first amount of native code and such that a second processor (for example, ME of an NFP on the NIC) executes the second amount of native code.
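
    A hand-wavy Python sketch of the split described above: packets matching the first condition are handled by code standing in for the compiled high-level (P4) program, while special packets fall through to a stand-in for the low-level plugin; the ethertype test is an invented example condition.

```python
# Two handlers standing in for the two amounts of compiled native code.
def p4_derived_handler(pkt):            # stands in for native code on the first processor
    return f"forward to port {pkt['dst_port'] % 4}"

def plugin_handler(pkt):                # stands in for the plugin on the second processor
    return "punt to slow path for special protocol"

def sdn_switch(pkt):
    if pkt.get("ethertype") == 0x88B5:  # hypothetical "special packet" condition
        return plugin_handler(pkt)
    return p4_derived_handler(pkt)

print(sdn_switch({"ethertype": 0x0800, "dst_port": 443}))
print(sdn_switch({"ethertype": 0x88B5, "dst_port": 0}))
```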

    Update packet sequence number packet ready command

    Publication No.: US10341246B1

    Publication Date: 2019-07-02

    Application No.: US14530761

    Application Date: 2014-11-02

    Abstract: A method of performing an update packet sequence number packet ready command (drop packet mode operation) is described herein. A first packet ready command is received from a memory system via a bus and onto a first network interface circuit. The first packet ready command includes a multicast value. A first communication mode is determined as a function of the multicast value. The multicast value indicates a single packet was communicated by a second network interface circuit. A packet sequence number stored in a memory unit is updated. The memory unit is included in the first network interface circuit. The first network interface circuit does not free the first packet from the memory system. The network interface circuits and the memory system are included on an Island-Based Network Flow Processor. The bus is a Command/Push/Pull (CPP) bus.
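
    An abstract Python model of the drop-packet-mode handling described above; the field names and the multicast encoding are invented for illustration.

```python
# On a single-packet multicast value, update the sequence number but do not free the packet.
MULTICAST_SINGLE = 0          # value meaning "one other interface already sent the packet"

class NetworkInterface:
    def __init__(self):
        self.expected_seq = 0     # packet sequence number kept in the interface's memory unit

    def on_packet_ready(self, cmd, memory_system):
        if cmd["multicast"] == MULTICAST_SINGLE:
            # Drop packet mode: just advance the sequence number; do not transmit
            # and do not free the packet buffer in the memory system.
            self.expected_seq = cmd["seq"] + 1
            return "sequence updated, packet not freed"
        memory_system.free(cmd["buffer"])         # other modes would free after sending
        return "transmitted and freed"

class MemorySystem:
    def __init__(self):
        self.buffers = {1: b"payload"}
    def free(self, buf_id):
        self.buffers.pop(buf_id, None)

mem = MemorySystem()
nic = NetworkInterface()
print(nic.on_packet_ready({"multicast": MULTICAST_SINGLE, "seq": 7, "buffer": 1}, mem))
print(nic.expected_seq, 1 in mem.buffers)   # 8 True: the buffer is still allocated
```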

    Network interface device that alerts a monitoring processor if configuration of a virtual NID is changed

    Publication No.: US10228968B2

    Publication Date: 2019-03-12

    Application No.: US15688937

    Application Date: 2017-08-29

    Abstract: A Network Interface Device (NID) of a web hosting server implements multiple virtual NIDs. For each virtual NID there is a block in a memory of a transactional memory on the NID. This block stores configuration information that configures the corresponding virtual NID. The NID also has a single managing processor that monitors configuration of the plurality of virtual NIDs. If there is a write into the memory space where the configuration information for the virtual NIDs is stored, then the transactional memory detects this write and in response sends an alert to the managing processor. The size and location of the memory space in the memory for which write alerts are to be generated are programmable. The content and destination of the alert are also programmable.
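
    A minimal Python sketch of the write-alert mechanism described above, with an invented memory layout: the watched range is programmable and covers per-virtual-NID configuration blocks, and any write landing in it notifies the managing processor.

```python
# Transactional memory that alerts a managing processor on writes to a watched range.
class TransactionalMemory:
    def __init__(self, alert_cb):
        self.mem = bytearray(4096)
        self.watch_base, self.watch_size = 0, 0
        self.alert_cb = alert_cb

    def program_watch(self, base, size):            # programmable size and location
        self.watch_base, self.watch_size = base, size

    def write(self, addr, data):
        self.mem[addr:addr + len(data)] = data
        if self.watch_base <= addr < self.watch_base + self.watch_size:
            self.alert_cb(addr)                     # alert the managing processor

def managing_processor_alert(addr):
    vnid = (addr - 0x100) // 64                     # hypothetical: 64-byte block per virtual NID
    print(f"config of virtual NID {vnid} changed at 0x{addr:x}; re-auditing")

tm = TransactionalMemory(managing_processor_alert)
tm.program_watch(base=0x100, size=8 * 64)           # watch 8 virtual NID config blocks
tm.write(0x140, b"\x01")                            # write into virtual NID 1's block -> alert
tm.write(0x800, b"\x01")                            # outside the watched range -> no alert
```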
