Compartmentalization of the user network interface to a device
    1.
    Invention Grant
    Compartmentalization of the user network interface to a device (In Force)

    Publication No.: US08918868B2

    Publication Date: 2014-12-23

    Application No.: US13742311

    Application Date: 2013-01-15

    Abstract: A device has a physical network interface port through which a user can monitor and configure the device. A backend process and a virtual machine (VM) execute on a host operating system (OS). A front end user interface process executes on the VM, and is therefore compartmentalized in the VM. There is no front end user interface executing on the host OS outside the VM. The only management access channel into the device is via a first communication path through the physical network interface port, to the VM, up the VM's stack, and to the front end process. If the backend process is to be instructed to take an action, then the front end process forwards an application layer instruction to the backend process via a second communication path. The instruction passes down the VM stack, across a virtual secure network link, up the host stack, and to the backend process.

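    The following minimal C sketch (not code from the patent) illustrates the second communication path described above: a front-end process inside the VM forwarding an application-layer instruction to the backend over the virtual secure network link. The address 169.254.0.1, port 9000, and the instruction string are hypothetical placeholders.

```c
/* Hypothetical front-end helper: open a TCP connection over the virtual
 * secure network link and forward one application-layer instruction to the
 * backend process on the host. Address, port, and message are placeholders. */
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int forward_instruction(const char *instr)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in backend = {0};
    backend.sin_family = AF_INET;
    backend.sin_port = htons(9000);                        /* backend's listening port (assumed) */
    inet_pton(AF_INET, "169.254.0.1", &backend.sin_addr);  /* host end of the virtual link (assumed) */

    if (connect(fd, (struct sockaddr *)&backend, sizeof(backend)) < 0) {
        close(fd);
        return -1;
    }
    ssize_t n = send(fd, instr, strlen(instr), 0);         /* application-layer instruction */
    close(fd);
    return n < 0 ? -1 : 0;
}

int main(void)
{
    /* e.g. the front end asking the backend to apply a configuration change */
    return forward_instruction("set mgmt-port speed 1G\n") == 0 ? 0 : 1;
}
```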

    Recursive lookup with a hardware trie structure that has no sequential logic elements
    2.
    Invention Grant
    Recursive lookup with a hardware trie structure that has no sequential logic elements (In Force)

    Publication No.: US08902902B2

    Publication Date: 2014-12-02

    Application No.: US13552555

    Application Date: 2012-07-18

    CPC classification number: H03K17/00 G06F9/467 G06F13/40 H04L45/745 H04L45/748

    Abstract: A hardware trie structure includes a tree of internal node circuits and leaf node circuits. Each internal node is configured by a corresponding multi-bit node control value (NCV). Each leaf node can output a corresponding result value (RV). An input value (IV) supplied onto input leads of the trie causes signals to propagate through the trie such that one of the leaf nodes outputs one of the RVs onto output leads of the trie. In a transactional memory, a memory stores a set of NCVs and RVs. In response to a lookup command, the NCVs and RVs are read out of memory and are used to configure the trie. The IV of the lookup is supplied to the input leads, and the trie looks up an RV. A non-final RV initiates another lookup in a recursive fashion, whereas a final RV is returned as the result of the lookup command.

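    A behavioral software model can make the lookup easier to follow. The sketch below is an illustration, not the hardware design: it models a small binary trie in which each internal node's NCV selects which bit of the input value to test and each leaf holds an RV. The trie depth, field widths, and the use of the top bit as a "final" flag are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define DEPTH 3                               /* 7 internal nodes, 8 leaves (assumed size) */

typedef struct {
    uint8_t  ncv[(1 << DEPTH) - 1];           /* per internal node: which IV bit to test */
    uint32_t rv[1 << DEPTH];                  /* per leaf: result value; top bit = "final" (assumed) */
} trie_config;

static uint32_t trie_lookup(const trie_config *cfg, uint32_t iv)
{
    unsigned node = 0;                        /* start at the root internal node */
    for (int level = 0; level < DEPTH; level++) {
        unsigned bit = (iv >> cfg->ncv[node]) & 1u;
        node = 2u * node + 1u + bit;          /* signal propagates left or right */
    }
    return cfg->rv[node - ((1u << DEPTH) - 1u)];  /* reached leaf drives its RV */
}

int main(void)
{
    trie_config cfg = {
        .ncv = {31, 30, 30, 29, 29, 29, 29},
        .rv  = {0, 1, 2, 3, 4, 5, 6, 7u | 0x80000000u}
    };
    uint32_t rv = trie_lookup(&cfg, 0xC0000000u);
    /* A non-final RV would name the next set of NCVs/RVs to read from memory,
       giving the recursive lookup described in the abstract. */
    printf("RV = 0x%08x\n", (unsigned)rv);
    return 0;
}
```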

    Atomic compare and write memory
    3.
    Invention Grant
    Atomic compare and write memory (In Force)

    Publication No.: US08793438B2

    Publication Date: 2014-07-29

    Application No.: US12579649

    Application Date: 2009-10-15

    CPC classification number: G06F9/30021 G06F9/30043 G06F9/526 G06F2209/521

    Abstract: A microcontroller system may include a microcontroller having a processor and a first memory, a memory bus, and a second memory in communication with the microcontroller via the memory bus. The first memory may include instructions for accessing a first data set from a contiguous memory block in the second memory. The first data set may include a first word having a first value and a plurality of first other words. The first memory may include instructions for receiving a write instruction including a second data set to be written to the contiguous memory block, the second data set including a second word having a second value. The first memory may include instructions for determining whether the first value equals the second value. If so, the first memory may include instructions for writing the second data set to the contiguous memory block and updating the first value.

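    The core check-then-write step might look like the following C sketch. It is illustrative only: the block size, the sequence-number interpretation of the first value, and all names are assumptions rather than the patented implementation.

```c
/* Write new_block over block only if the first words match, then update the
 * first value (treated here as a version/sequence number). Returns whether
 * the write took place. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_WORDS 8                        /* size of the contiguous memory block (assumed) */

static bool compare_and_write(uint32_t block[BLOCK_WORDS],
                              const uint32_t new_block[BLOCK_WORDS])
{
    if (block[0] != new_block[0])
        return false;                        /* first value mismatch: no write */
    memcpy(block, new_block, sizeof(uint32_t) * BLOCK_WORDS);
    block[0] = new_block[0] + 1;             /* update the first value */
    return true;
}

int main(void)
{
    uint32_t block[BLOCK_WORDS]     = {7, 10, 20, 30, 40, 50, 60, 70};
    uint32_t new_block[BLOCK_WORDS] = {7, 11, 21, 31, 41, 51, 61, 71};
    return compare_and_write(block, new_block) ? 0 : 1;
}
```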

    Transactional memory that performs an atomic metering command
    4.
    Invention Grant
    Transactional memory that performs an atomic metering command (In Force)

    Publication No.: US08775686B2

    Publication Date: 2014-07-08

    Application No.: US13598533

    Application Date: 2012-08-29

    Applicant: Gavin J. Stark

    Inventor: Gavin J. Stark

    Abstract: A transactional memory (TM) receives an Atomic Metering Command (AMC) across a bus from a processor. The command includes a memory address and a meter pair indicator value. In response to the AMC, the TM pulls an input value (IV). The TM uses the memory address to read a word including multiple credit values from a memory unit. Circuitry within the TM selects a pair of credit values, subtracts the IV from each of the pair of credit values thereby generating a pair of decremented credit values, compares the pair of decremented credit values with a threshold value, respectively, thereby generating a pair of indicator values, performs a lookup based upon the pair of indicator values and the meter pair indicator value, and outputs a selector value and a result value that represents a meter color. The selector value determines the credit values written back to the memory unit.

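    The sequence of operations reads naturally as code. The C sketch below is a simplified software model of the metering step; the two-bucket interpretation, the color mapping, and the write-back policy are assumptions for illustration, not the circuitry described in the patent.

```c
#include <stdint.h>
#include <stdio.h>

enum meter_color { GREEN, YELLOW, RED };

typedef struct {
    int32_t credit[2];                       /* the selected pair of credit values */
} meter_pair;

static enum meter_color atomic_meter(meter_pair *m, int32_t iv, int32_t threshold)
{
    int32_t dec0 = m->credit[0] - iv;        /* subtract the IV from each credit */
    int32_t dec1 = m->credit[1] - iv;
    int ind0 = dec0 >= threshold;            /* the pair of indicator values */
    int ind1 = dec1 >= threshold;

    if (ind0 && ind1) {                      /* both buckets still have credit */
        m->credit[0] = dec0;                 /* selector: write both decremented values back */
        m->credit[1] = dec1;
        return GREEN;
    }
    if (ind1) {                              /* only the second bucket has credit */
        m->credit[1] = dec1;
        return YELLOW;
    }
    return RED;                              /* no credit consumed */
}

int main(void)
{
    meter_pair m = { .credit = {1500, 3000} };
    printf("meter color = %d\n", (int)atomic_meter(&m, 1000, 0));
    return 0;
}
```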

    Software update methodology
    5.
    Invention Grant
    Software update methodology (In Force)

    Publication No.: US09098373B2

    Publication Date: 2015-08-04

    Application No.: US13741310

    Application Date: 2013-01-14

    CPC classification number: G06F8/65 H04L67/1095 H04L67/1097

    Abstract: Software update information is communicated to a network appliance either across a network or from a local memory device. The software update information includes kernel data, application data, or indicator data. The network appliance includes a first storage device, a second storage device, an operating memory, a central processing unit (CPU), and a network adapter. First and second storage devices are persistent storage devices. In a first example, both kernel data and application data are updated in the network appliance in response to receiving the software update information. In a second example, only the kernel data is updated in the network appliance in response to receiving the software update information. In a third example, only the application data is updated in the network appliance in response to receiving the software update information. Indicator data included in the software update information determines the data to be updated in the network appliance.

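    A minimal sketch of the indicator-driven dispatch, assuming a simple header with flag bits that select whether kernel data, application data, or both are written to the persistent storage devices. The header layout, flag values, and printed messages are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define UPDATE_KERNEL 0x1                    /* indicator bit: kernel data present (assumed encoding) */
#define UPDATE_APP    0x2                    /* indicator bit: application data present (assumed encoding) */

struct update_header {
    uint32_t indicator;                      /* what this update image contains */
    uint32_t kernel_len;
    uint32_t app_len;
};

static void apply_update(const struct update_header *hdr, const uint8_t *payload)
{
    const uint8_t *p = payload;
    if (hdr->indicator & UPDATE_KERNEL) {
        printf("writing %u bytes of kernel data to the first storage device\n",
               (unsigned)hdr->kernel_len);
        p += hdr->kernel_len;
    }
    if (hdr->indicator & UPDATE_APP) {
        printf("writing %u bytes of application data to the second storage device\n",
               (unsigned)hdr->app_len);
    }
    (void)p;                                 /* p now points past the consumed sections */
}

int main(void)
{
    struct update_header hdr = { UPDATE_KERNEL | UPDATE_APP, 16, 8 };
    uint8_t payload[24] = {0};               /* kernel data followed by application data */
    apply_update(&hdr, payload);
    return 0;
}
```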

    Network appliance that determines what processor to send a future packet to based on a predicted future arrival time
    6.
    Invention Grant
    Network appliance that determines what processor to send a future packet to based on a predicted future arrival time (In Force)

    Publication No.: US09071545B2

    Publication Date: 2015-06-30

    Application No.: US13668251

    Application Date: 2012-11-03

    CPC classification number: H04L45/30 H04L43/0852 H04L47/245 H04L47/283

    Abstract: A network appliance includes a network processor and several processing units. Packets of a flow pair are received onto the network appliance. Without performing deep packet inspection on any packet of the flow pair, the network processor analyzes the flows, estimates therefrom the application protocol used, and determines a predicted future time when the next packet will likely be received. The network processor determines to send the next packet to a selected one of the processing units based in part on the predicted future time. In some cases, the network processor causes a cache of the selected processing unit to be preloaded shortly before the predicted future time. When the next packet is actually received, the packet is directed to the selected processing unit. In this way, packets are directed to processing units within the network appliance based on predicted future packet arrival times without the use of deep packet inspection.

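    A simplified sketch of the dispatch decision: pick the processing unit expected to be free by the predicted arrival time and schedule its cache preload shortly before that time. The structures, the four-unit count, and the preload lead time are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PUS      4
#define PRELOAD_LEAD 50                      /* microseconds before the predicted arrival (assumed) */

typedef struct {
    uint64_t busy_until_us;                  /* when this processing unit is expected to be free */
} proc_unit;

static int select_pu(const proc_unit pu[NUM_PUS], uint64_t predicted_arrival_us,
                     uint64_t *preload_at_us)
{
    int best = 0;
    for (int i = 1; i < NUM_PUS; i++)
        if (pu[i].busy_until_us < pu[best].busy_until_us)
            best = i;                        /* unit that frees up soonest */

    /* preload the selected unit's cache shortly before the packet is expected */
    *preload_at_us = predicted_arrival_us > PRELOAD_LEAD
                   ? predicted_arrival_us - PRELOAD_LEAD : 0;
    return best;
}

int main(void)
{
    proc_unit pus[NUM_PUS] = {{900}, {120}, {450}, {700}};
    uint64_t preload_at;
    int pu = select_pu(pus, 1000, &preload_at);
    printf("send the next packet to PU %d, preload its cache at t=%llu us\n",
           pu, (unsigned long long)preload_at);
    return 0;
}
```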

    Distributed credit FIFO link of a configurable mesh data bus
    7.
    Invention Grant
    Distributed credit FIFO link of a configurable mesh data bus (In Force)

    Publication No.: US09069649B2

    Publication Date: 2015-06-30

    Application No.: US13399846

    Application Date: 2012-02-17

    CPC classification number: G06F13/4022 G06F13/00 H04L47/39 H04L49/901

    Abstract: An island-based integrated circuit includes a configurable mesh data bus. The data bus includes four meshes. Each mesh includes, for each island, a crossbar switch and radiating half links. The half links of adjacent islands align to form links between crossbar switches. A link is implemented as two distributed credit FIFOs. In one direction, a link portion involves a FIFO associated with an output port of a first island, a first chain of registers, and a second FIFO associated with an input port of a second island. When a transaction value passes through the FIFO and through the crossbar switch of the second island, an arbiter in the crossbar switch returns a taken signal. The taken signal passes back through a second chain of registers to a credit count circuit in the first island. The credit count circuit maintains a credit count value for the distributed credit FIFO.

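    The credit discipline can be modeled behaviorally: the sender only pushes a transaction value while it holds credits, and each returned taken signal restores one credit. The sketch below is an illustration with an assumed credit count equal to the FIFO depth.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINK_CREDITS 8                       /* matches the distributed FIFO depth (assumed) */

typedef struct {
    int credits;                             /* credit count kept at the sending output port */
} credit_link;

static bool link_send(credit_link *l, uint32_t value)
{
    if (l->credits == 0)
        return false;                        /* FIFO may be full: hold the transaction */
    l->credits--;                            /* value enters the distributed credit FIFO */
    printf("sent transaction value 0x%08x\n", (unsigned)value);
    return true;
}

static void link_taken(credit_link *l)
{
    l->credits++;                            /* taken signal returned through the register chain */
}

int main(void)
{
    credit_link link = { LINK_CREDITS };
    if (link_send(&link, 0xdeadbeefu))
        link_taken(&link);                   /* the far crossbar's arbiter accepted the value */
    return 0;
}
```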

    Inter-packet interval prediction learning algorithm
    8.
    Invention Grant
    Inter-packet interval prediction learning algorithm (In Force)

    Publication No.: US09042252B2

    Publication Date: 2015-05-26

    Application No.: US13675620

    Application Date: 2012-11-13

    Abstract: An appliance receives packets that are part of a flow pair, each packet sharing an application protocol. The appliance determines the application protocol of the packets by performing deep packet inspection (DPI) on the packets. Packet sizes are measured and converted into packet size states. Packet size states, packet sequence numbers, and packet flow directions are used to create an application protocol estimation table (APET). The APET is used during normal operation to estimate the application protocol of a flow pair without performing time consuming DPI. The appliance then determines inter-packet intervals between received packets. The inter-packet intervals are converted into inter-packet interval states. The inter-packet interval states and packet sequence numbers are used to create an inter-packet interval prediction table. The appliance then stores an inter-packet interval prediction table for each application protocol. The inter-packet interval prediction table is used during operation to predict the inter-packet interval between packets.

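    The learning step amounts to quantizing each observed inter-packet interval into a state and recording it per packet position. The sketch below illustrates that idea; the bucket boundaries, table size, and single-protocol table are assumptions, not the appliance's actual tables.

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_SEQ 16                           /* packet positions tracked per flow (assumed) */

/* quantize an inter-packet interval into one of four states (assumed buckets) */
static int interval_state(uint32_t interval_us)
{
    if (interval_us < 100)   return 0;
    if (interval_us < 1000)  return 1;
    if (interval_us < 10000) return 2;
    return 3;
}

/* prediction_table[seq] holds the learned interval state for the gap that
   follows packet number seq of one application protocol */
static uint8_t prediction_table[MAX_SEQ];

static void learn(uint32_t seq, uint32_t interval_us)
{
    if (seq < MAX_SEQ)
        prediction_table[seq] = (uint8_t)interval_state(interval_us);
}

int main(void)
{
    learn(3, 750);                           /* observed 750 us after packet #3 */
    printf("predicted interval state after packet 3: %u\n",
           (unsigned)prediction_table[3]);
    return 0;
}
```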

    Transactional memory that supports a put with low priority ring command
    9.
    Invention Grant
    Transactional memory that supports a put with low priority ring command (In Force)

    Publication No.: US08972630B1

    Publication Date: 2015-03-03

    Application No.: US14037226

    Application Date: 2013-09-25

    Inventor: Gavin J. Stark

    Abstract: A transactional memory (TM) includes a control circuit pipeline and an associated memory unit. The memory unit stores a plurality of rings. The pipeline maintains, for each ring, a head pointer and a tail pointer. A ring operation stage of the pipeline maintains the pointers as values are put onto and are taken off the rings. A put command causes the TM to put a value into a ring, provided the ring is not full. A get command causes the TM to take a value off a ring, provided the ring is not empty. A put with low priority command causes the TM to put a value into a ring, provided the ring has at least a predetermined amount of free buffer space. A get from a set of rings command causes the TM to get a value from the highest priority non-empty ring (of a specified set of rings).

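    A software model of the ring commands clarifies the differences between them: put fails on a full ring, get fails on an empty ring, put-with-low-priority additionally requires a minimum amount of free space, and get-from-a-set returns from the highest-priority non-empty ring. Ring size, the headroom constant, and the priority-equals-index convention are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define RING_SIZE         16                 /* entries per ring (assumed) */
#define LOW_PRIO_HEADROOM 4                  /* free space required for a low-priority put (assumed) */

typedef struct {
    uint32_t buf[RING_SIZE];
    unsigned head;                           /* next entry to take off the ring */
    unsigned tail;                           /* next free slot */
    unsigned count;
} ring;

static bool ring_put(ring *r, uint32_t v)
{
    if (r->count == RING_SIZE) return false; /* ring full: put fails */
    r->buf[r->tail] = v;
    r->tail = (r->tail + 1) % RING_SIZE;
    r->count++;
    return true;
}

static bool ring_put_low_priority(ring *r, uint32_t v)
{
    /* only succeeds while the ring keeps at least the predetermined free space */
    if (RING_SIZE - r->count < LOW_PRIO_HEADROOM) return false;
    return ring_put(r, v);
}

static bool ring_get(ring *r, uint32_t *v)
{
    if (r->count == 0) return false;         /* ring empty: get fails */
    *v = r->buf[r->head];
    r->head = (r->head + 1) % RING_SIZE;
    r->count--;
    return true;
}

/* get from a set of rings: take from the highest-priority (lowest-index) non-empty ring */
static bool ring_set_get(ring *rings[], unsigned n, uint32_t *v)
{
    for (unsigned i = 0; i < n; i++)
        if (ring_get(rings[i], v)) return true;
    return false;
}

int main(void)
{
    ring r = {0};
    ring *set[1] = {&r};
    uint32_t v;
    ring_put(&r, 1);
    ring_put_low_priority(&r, 2);
    return ring_set_get(set, 1, &v) ? 0 : 1;
}
```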

    Flow control using a local event ring in an island-based network flow processor
    10.
    Invention Grant
    Flow control using a local event ring in an island-based network flow processor (In Force)

    Publication No.: US08929376B2

    Publication Date: 2015-01-06

    Application No.: US13400008

    Application Date: 2012-02-17

    CPC classification number: H04L49/9047 H04L47/13 H04L49/102 H04L49/9084

    Abstract: An island-based network flow processor (IB-NFP) integrated circuit includes islands organized in rows. A configurable mesh event bus extends through the islands and is configured to form a local event ring. The configurable mesh event bus is configured with configuration information received via a configurable mesh control bus. The local event ring involves event ring circuits and event ring segments. In one example, a packet is received onto a first island. If an amount of a processing resource (for example, memory buffer space) available to the first island is below a threshold, then an event packet is communicated from the first island to a second island via the local event ring. In response, the second island causes a third island to communicate via a command/push/pull data bus with the first island, thereby increasing the amount of the processing resource available to the first island for handling incoming packets.

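    A minimal sketch of the flow-control trigger, assuming a simple event-packet layout and threshold: when free buffer space on the first island drops below the threshold, an event packet is injected into the local event ring for another island to act on.

```c
#include <stdint.h>
#include <stdio.h>

#define BUFFER_THRESHOLD 64                  /* free packet buffers considered "low" (assumed) */

struct event_packet {
    uint8_t  source_island;                  /* island reporting the condition */
    uint8_t  event_type;                     /* e.g. 1 = buffer space low (assumed encoding) */
    uint16_t free_buffers;                   /* resource level when the event was raised */
};

static void event_ring_inject(const struct event_packet *ev)
{
    /* stand-in for placing the event packet onto the local event ring segment */
    printf("event ring: island %u reports type %u with %u free buffers\n",
           (unsigned)ev->source_island, (unsigned)ev->event_type,
           (unsigned)ev->free_buffers);
}

static void check_buffer_space(uint8_t island_id, uint16_t free_buffers)
{
    if (free_buffers < BUFFER_THRESHOLD) {
        struct event_packet ev = { island_id, 1, free_buffers };
        event_ring_inject(&ev);              /* travels around the ring to the monitoring island */
    }
}

int main(void)
{
    check_buffer_space(12, 40);              /* below threshold: an event packet is sent */
    return 0;
}
```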
