Compartmentalization of the user network interface to a device
    151.
    Invention Grant
    Compartmentalization of the user network interface to a device (In Force)

    Publication No.: US09331906B1

    Publication Date: 2016-05-03

    Application No.: US14551057

    Filing Date: 2014-11-23

    Abstract: A device has a physical network interface port through which a user can monitor and configure the device. A backend process and a virtual machine (VM) execute on a host operating system (OS). A front end user interface process executes on the VM, and is therefore compartmentalized in the VM. There is no front end user interface executing on the host OS outside the VM. The only management access channel into the device is via a first communication path through the physical network interface port, to the VM, up the VM's stack, and to the front end process. If the backend process is to be instructed to take an action, then the front end process forwards an application layer instruction to the backend process via a second communication path. The instruction passes down the VM stack, across a virtual secure network link, up the host stack, and to the backend process.
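
    A minimal sketch of the second communication path described above, assuming the backend listens on a host-only virtual network address; the address 192.168.122.1, port 9000, and the instruction text are hypothetical placeholders, not values from the patent.

```c
/* Front-end process inside the VM forwarding one application-layer
 * instruction to the backend process on the host OS. */
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static int forward_to_backend(const char *instruction)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in backend = {0};
    backend.sin_family = AF_INET;
    backend.sin_port   = htons(9000);                        /* hypothetical port */
    inet_pton(AF_INET, "192.168.122.1", &backend.sin_addr);  /* virtual link addr */

    if (connect(fd, (struct sockaddr *)&backend, sizeof backend) < 0) {
        close(fd);
        return -1;
    }
    /* The instruction travels down the VM's stack, across the virtual
     * secure network link, and up the host's stack to the backend. */
    ssize_t n = write(fd, instruction, strlen(instruction));
    close(fd);
    return n < 0 ? -1 : 0;
}

int main(void)
{
    return forward_to_backend("SET mgmt-vlan 42\n") == 0 ? 0 : 1;
}
```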

Reordering PCP flows as they are assigned to virtual channels
    152.
    Invention Grant
    Reordering PCP flows as they are assigned to virtual channels (In Force)

    Publication No.: US09270488B2

    Publication Date: 2016-02-23

    Application No.: US14321744

    Filing Date: 2014-07-01

    Inventor: Joseph M. Lamb

    CPC classification number: H04L12/467 H04L45/745 H04L49/25

    Abstract: A Network Flow Processor (NFP) integrated circuit receives, via each of a plurality of physical MAC ports, PCP (Priority Code Point) flows. The NFP also maintains, for each of a plurality of virtual channels, a linked list of buffers. There is one port enqueue engine for each physical MAC port. For each PCP flow received via the physical MAC port associated with a port enqueue engine, the engine causes frame data of the flow to be loaded into one particular linked list of buffers. Each engine has a lookup table circuit that is configurable so that the relative priorities of the PCP flows are reordered as the PCP flows are assigned to virtual channels. A PCP flow with a higher PCP value can be assigned to a lower priority virtual channel, whereas a PCP flow with a lower PCP value can be assigned to a higher priority virtual channel.
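
    A small software model of the configurable lookup table in each port enqueue engine, assuming 3-bit PCP values, one 8-entry map per port, and a convention that a larger channel number means higher priority; the particular PCP-to-channel assignments are illustrative only.

```c
#include <stdio.h>
#include <stdint.h>

#define NUM_PCP 8   /* 3-bit Priority Code Point field */

/* Per-port, configurable PCP -> virtual channel lookup table.  Because it
 * is programmable, relative priorities can be reordered: here PCP 6 (a
 * high PCP value) is assigned to low-priority channel 1, while PCP 1 (a
 * low PCP value) is assigned to high-priority channel 6. */
static const uint8_t pcp_to_vc[NUM_PCP] = { 0, 6, 2, 3, 4, 5, 1, 7 };

static uint8_t assign_virtual_channel(uint8_t pcp)
{
    return pcp_to_vc[pcp & 0x7];   /* index the lookup table circuit */
}

int main(void)
{
    for (int pcp = 0; pcp < NUM_PCP; pcp++)
        printf("PCP %d -> virtual channel %d\n",
               pcp, assign_virtual_channel((uint8_t)pcp));
    return 0;
}
```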

TABLE FETCH PROCESSOR INSTRUCTION USING TABLE NUMBER TO BASE ADDRESS TRANSLATION
    153.
    Invention Application
    TABLE FETCH PROCESSOR INSTRUCTION USING TABLE NUMBER TO BASE ADDRESS TRANSLATION (Pending, Published)

    Publication No.: US20150317163A1

    Publication Date: 2015-11-05

    Application No.: US14267342

    Filing Date: 2014-05-01

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor includes no instruction counter and only fetches instructions either as a result of being prompted from the outside by an input data value and/or an initial fetch information value, or as a result of execution of a fetch instruction. Initially the processor is not clocking. An incoming value kick-starts the processor to start clocking and to fetch a block of instructions from a section of code in a table. The input data value and/or the initial fetch information value determines the section and table from which the block is fetched. A LUT converts a table number in the initial fetch information value into a base address where the table is found. Fetch instructions at the ends of sections of code cause program execution to jump from section to section. A finished instruction causes an output data value to be output and stops clocking of the processor.
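
    A sketch of the table-number-to-base-address translation, assuming a small LUT indexed by a table number carried in the initial fetch information value; the table count, base addresses, and section size are invented for illustration.

```c
#include <stdio.h>
#include <stdint.h>

#define NUM_TABLES 4

/* LUT: table number (taken from the initial fetch information value) to
 * the base address at which that table of code is found. */
static const uint32_t table_base[NUM_TABLES] = { 0x0000, 0x0400, 0x0800, 0x0C00 };

/* Address of the block of instructions to fetch: the LUT supplies the
 * table's base address, and the section number selects a section of code
 * inside that table. */
static uint32_t fetch_address(uint8_t table_num, uint8_t section, uint32_t section_size)
{
    return table_base[table_num % NUM_TABLES] + section * section_size;
}

int main(void)
{
    /* Table 2, section 3, 16-word sections: 0x0800 + 3*16 = 0x0830. */
    printf("fetch block at 0x%04X\n", (unsigned)fetch_address(2, 3, 16));
    return 0;
}
```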

KICK-STARTED RUN-TO-COMPLETION PROCESSOR HAVING NO INSTRUCTION COUNTER
    154.
    Invention Application
    KICK-STARTED RUN-TO-COMPLETION PROCESSOR HAVING NO INSTRUCTION COUNTER (Pending, Published)

    Publication No.: US20150317160A1

    Publication Date: 2015-11-05

    Application No.: US14267298

    Filing Date: 2014-05-01

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor includes no instruction counter and only fetches instructions either as a result of being prompted from the outside by an input data value and/or an initial fetch information value, or as a result of execution of a fetch instruction. Initially the processor is not clocking. An incoming value kick-starts the processor to start clocking and to fetch a block of instructions from a section of code in a table. The input data value and/or the initial fetch information value determines the section and table from which the block is fetched. A LUT converts a table number in the initial fetch information value into a base address where the table is found. Fetch instructions at the ends of sections of code cause program execution to jump from section to section. A finished instruction causes an output data value to be output and stops clocking of the processor.
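
    A toy simulation of the kick-start behaviour, not the hardware itself: the model only runs when an input value arrives, follows explicit fetch instructions between sections, and stops clocking on a finished instruction. The opcodes and table layout are assumptions made for illustration.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

enum { OP_ADD1, OP_FETCH, OP_FINISHED };

typedef struct { uint8_t op; uint8_t arg; } insn_t;

/* One table holding two sections of code; an OP_FETCH at the end of a
 * section jumps execution to another section (there is no program
 * counter; control flow happens only via fetches). */
static const insn_t table[2][3] = {
    { {OP_ADD1, 0}, {OP_ADD1, 0}, {OP_FETCH, 1} },        /* section 0 */
    { {OP_ADD1, 0}, {OP_FINISHED, 0}, {OP_FINISHED, 0} }, /* section 1 */
};

static uint32_t run(uint32_t input, uint8_t initial_section)
{
    bool clocking = true;              /* kick-started by the incoming value */
    uint8_t section = initial_section; /* chosen by the initial fetch info   */
    uint32_t value = input;

    while (clocking) {
        for (int i = 0; i < 3; i++) {
            const insn_t *in = &table[section][i];
            if (in->op == OP_ADD1) {
                value++;
            } else if (in->op == OP_FETCH) {
                section = in->arg;     /* fetch the next block of code    */
                break;
            } else {                   /* OP_FINISHED                     */
                clocking = false;      /* output the value, stop clocking */
                break;
            }
        }
    }
    return value;                      /* output data value */
}

int main(void)
{
    printf("output = %u\n", (unsigned)run(10, 0));   /* 10 + 3 increments = 13 */
    return 0;
}
```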

Hardware prefix reduction circuit
    155.
    Invention Grant
    Hardware prefix reduction circuit (In Force)

    Publication No.: US09164794B2

    Publication Date: 2015-10-20

    Application No.: US13970599

    Filing Date: 2013-08-20

    Applicant: Gavin J. Stark

    Inventor: Gavin J. Stark

    CPC classification number: G06F9/467

    Abstract: A hardware prefix reduction circuit includes a plurality of levels. Each level includes an input conductor, an output conductor, and a plurality of nodes. Each node includes a buffer and a storage device that stores a digital logic level. One node further includes an inverter. Another node further includes an AND gate with two non-inverting inputs. Another node further includes an AND gate with an inverting input and a non-inverting input. One bit of an input value, such as an internet protocol address, is communicated on the input conductor. The first level of the prefix reduction circuit includes two nodes and each subsequent level includes twice as many nodes as are included in the preceding level. A digital logic level is individually programmed into each storage device. The digital logic levels stored in the storage devices determine the prefix reduction algorithm implemented by the hardware prefix reduction circuit.
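
    One possible software reading of the circuit, assuming level k decodes the (k+1)-bit prefix of the input (so each level has twice as many nodes as the one before) and a programmed bit per node enables that prefix; the sizes and the enabled prefixes below are illustrative, not taken from the patent.

```c
#include <stdio.h>
#include <stdint.h>

#define LEVELS 4   /* examine the top 4 bits of the input value */

/* enable[k][p]: programmed storage bit of the node at level k that decodes
 * the (k+1)-bit prefix p. */
static uint8_t enable[LEVELS][1 << LEVELS];

/* Longest enabled prefix length (0..LEVELS) of the input's top bits. */
static int reduce(uint32_t value)
{
    int best = 0;
    uint32_t prefix = 0;
    for (int k = 0; k < LEVELS; k++) {
        int bit = (value >> (31 - k)) & 1;      /* one input bit per level       */
        prefix = (prefix << 1) | (uint32_t)bit; /* path taken through the levels */
        if (enable[k][prefix])                  /* node's programmed logic level */
            best = k + 1;
    }
    return best;
}

int main(void)
{
    enable[0][1] = 1;   /* keep prefix "1"  */
    enable[1][2] = 1;   /* keep prefix "10" */

    printf("reduced length = %d\n", reduce(0x80000000u));  /* bits 10... -> 2 */
    printf("reduced length = %d\n", reduce(0xC0000000u));  /* bits 11... -> 1 */
    return 0;
}
```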

NETWORK INTERFACE DEVICE THAT MAPS HOST BUS WRITES OF CONFIGURATION INFORMATION FOR VIRTUAL NIDS INTO A SMALL TRANSACTIONAL MEMORY
    156.
    Invention Application
    NETWORK INTERFACE DEVICE THAT MAPS HOST BUS WRITES OF CONFIGURATION INFORMATION FOR VIRTUAL NIDS INTO A SMALL TRANSACTIONAL MEMORY (In Force)

    Publication No.: US20150220449A1

    Publication Date: 2015-08-06

    Application No.: US14172844

    Filing Date: 2014-02-04

    Abstract: A Network Interface Device (NID) of a web hosting server implements multiple virtual NIDs. A virtual NID is configured by configuration information in an appropriate one of a set of smaller blocks in a high-speed memory on the NID. There is a smaller block for each virtual NID. A virtual machine on the host can configure its virtual NID by writing configuration information into a larger block in PCIe address space. Circuitry on the NID detects that the PCIe write is into address space occupied by the larger blocks. If the write is into this space, then address translation circuitry converts the PCIe address into a smaller address that maps to the appropriate one of the smaller blocks associated with the virtual NID to be configured. If the PCIe write is detected not to be an access of a larger block, then the NID does not perform the address translation.
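
    A sketch of the address check and translation, assuming a contiguous PCIe window of larger per-vNID blocks and a packed array of smaller blocks in the transactional memory; the window base, block sizes, and vNID count are hypothetical.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define VNID_COUNT       64
#define LARGE_BLOCK_SIZE 0x10000u   /* per-vNID block in PCIe address space */
#define SMALL_BLOCK_SIZE 0x100u     /* per-vNID block in the small memory   */
#define WINDOW_BASE      0x08000000u
#define WINDOW_SIZE      (VNID_COUNT * LARGE_BLOCK_SIZE)
#define SMALL_MEM_BASE   0x0000u

/* Returns true (and the translated address) if the PCIe write address
 * falls in the region occupied by the larger per-vNID blocks; otherwise
 * the write is not translated. */
static bool translate(uint32_t pcie_addr, uint32_t *out)
{
    if (pcie_addr < WINDOW_BASE || pcie_addr >= WINDOW_BASE + WINDOW_SIZE)
        return false;                        /* not a vNID configuration write */

    uint32_t off  = pcie_addr - WINDOW_BASE;
    uint32_t vnid = off / LARGE_BLOCK_SIZE;  /* which virtual NID              */
    uint32_t reg  = off % LARGE_BLOCK_SIZE;  /* offset inside its large block  */

    if (reg >= SMALL_BLOCK_SIZE)             /* only the low registers are     */
        return false;                        /* backed by the small memory     */

    *out = SMALL_MEM_BASE + vnid * SMALL_BLOCK_SIZE + reg;
    return true;
}

int main(void)
{
    uint32_t t;
    if (translate(0x08020010u, &t))          /* vNID 2, register offset 0x10   */
        printf("translated to 0x%04X\n", (unsigned)t);
    return 0;
}
```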

RESOURCE ALLOCATION WITH HIERARCHICAL SCOPE
    157.
    Invention Application
    RESOURCE ALLOCATION WITH HIERARCHICAL SCOPE (In Force)

    Publication No.: US20150128119A1

    Publication Date: 2015-05-07

    Application No.: US14074632

    Filing Date: 2013-11-07

    CPC classification number: G06F8/54 G06F8/447

    Abstract: A source code symbol can be declared to have a scope level indicative of a level in a hierarchy of scope levels, where the scope level indicates a circuit level or a sub-circuit level in the hierarchy. A novel instruction to the linker can define the symbol to be of a desired scope level. Location information indicates where different amounts of the object code are to be loaded into a system. A novel linker program uses the location information, along with the scope level information of the symbol, to uniquify instances of the symbol if necessary to resolve name collisions of symbols having the same scope. After the symbol uniquification step, the linker performs resource allocation. A resource instance is allocated to each symbol. The linker then replaces each instance of the symbol in the object code with the address of the allocated resource instance, thereby generating executable code.
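
    A compact model of the uniquification and allocation steps, assuming a flat symbol table and a ".locN" renaming scheme; the scope names, renaming scheme, and data layout are assumptions, not the linker's actual internals.

```c
#include <stdio.h>
#include <string.h>

enum scope { SCOPE_GLOBAL, SCOPE_CIRCUIT, SCOPE_SUBCIRCUIT };

struct symbol {
    char name[64];
    enum scope level;
    int  location;   /* where this piece of object code is loaded */
    int  resource;   /* resource instance allocated at link time  */
};

/* Uniquify narrow-scope symbols per load location to resolve name
 * collisions, then give every unique name its own resource instance;
 * identical names share one instance. */
static void link_symbols(struct symbol *syms, int n)
{
    int next_resource = 0;
    for (int i = 0; i < n; i++) {
        if (syms[i].level != SCOPE_GLOBAL) {
            char uniq[80];
            snprintf(uniq, sizeof uniq, "%s.loc%d",
                     syms[i].name, syms[i].location);   /* hypothetical scheme */
            snprintf(syms[i].name, sizeof syms[i].name, "%s", uniq);
        }
        int found = -1;
        for (int j = 0; j < i; j++)
            if (strcmp(syms[j].name, syms[i].name) == 0)
                found = syms[j].resource;
        syms[i].resource = (found >= 0) ? found : next_resource++;
    }
}

int main(void)
{
    struct symbol syms[] = {
        { "rx_ring", SCOPE_SUBCIRCUIT, 0, -1 },  /* same name, different  */
        { "rx_ring", SCOPE_SUBCIRCUIT, 1, -1 },  /* load locations        */
        { "stats",   SCOPE_GLOBAL,     0, -1 },  /* global: both uses     */
        { "stats",   SCOPE_GLOBAL,     1, -1 },  /* share one instance    */
    };
    link_symbols(syms, 4);
    for (int i = 0; i < 4; i++)
        printf("%-16s -> resource instance %d\n", syms[i].name, syms[i].resource);
    return 0;
}
```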

LINKER THAT STATICALLY ALLOCATES NON-MEMORY RESOURCES AT LINK TIME
    158.
    Invention Application
    LINKER THAT STATICALLY ALLOCATES NON-MEMORY RESOURCES AT LINK TIME (Pending, Published)

    Publication No.: US20150128117A1

    Publication Date: 2015-05-07

    Application No.: US14074606

    Filing Date: 2013-11-07

    CPC classification number: G06F8/54

    Abstract: A novel linker statically allocates resource instances of a non-memory resource at link time. In one example, a novel declare instruction in source code declares a pool of resource instances, where the resource instances are instances of the non-memory resource. A novel allocate instruction is then used to instruct the linker to allocate a resource instance from the pool to be associated with a symbol. Thereafter the symbol is usable in the source code to refer to an instance of the non-memory resource. At link time the linker allocates an instance of the non-memory resource to the symbol and then replaces each instance of the symbol with an address of the non-memory resource instance, thereby generating executable code. Examples of instances of non-memory resources include ring circuits and event filter circuits.
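
    A sketch of the pool idea only, assuming a fixed pool of ring circuits whose register addresses the linker hands out in order; the pool size, addresses, and function names are invented for illustration.

```c
#include <stdio.h>
#include <stdint.h>

#define POOL_SIZE 16   /* e.g. a declared pool of 16 hardware ring circuits */

struct pool {
    uint32_t base[POOL_SIZE];   /* register address of each ring instance */
    int next;                   /* next unallocated instance              */
};

/* "Link-time" allocation: bind the next free instance from the declared
 * pool to a symbol, and return the address the linker would patch into
 * the executable wherever the symbol is used. */
static uint32_t allocate_ring(struct pool *p, const char *symbol)
{
    if (p->next >= POOL_SIZE) {
        fprintf(stderr, "pool exhausted while allocating %s\n", symbol);
        return 0;
    }
    uint32_t addr = p->base[p->next++];
    printf("%s -> ring instance at 0x%08X\n", symbol, (unsigned)addr);
    return addr;
}

int main(void)
{
    struct pool rings = { .next = 0 };
    for (int i = 0; i < POOL_SIZE; i++)
        rings.base[i] = 0x40000000u + 0x100u * (uint32_t)i;   /* illustrative layout */

    allocate_ring(&rings, "pkt_free_ring");
    allocate_ring(&rings, "work_queue_ring");
    return 0;
}
```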

ALLOCATE INSTRUCTION AND API CALL THAT CONTAIN A SYMBOL FOR A NON-MEMORY RESOURCE
    159.
    Invention Application
    ALLOCATE INSTRUCTION AND API CALL THAT CONTAIN A SYMBOL FOR A NON-MEMORY RESOURCE (In Force)

    Publication No.: US20150128113A1

    Publication Date: 2015-05-07

    Application No.: US14074640

    Filing Date: 2013-11-07

    CPC classification number: G06F8/41 G06F8/457 G06F8/54

    Abstract: A novel allocate instruction and a novel API call are received onto a compiler. The allocate instruction includes a symbol that identifies a non-memory resource instance. The API call is a call to perform an operation on a non-memory resource instance, where the particular instance is indicated by the symbol in the API call. The compiler replaces the API call with a set of API instructions. A linker then allocates a value to be associated with the symbol, where the allocated value is one of a plurality of values, and where each value corresponds to a respective one of the non-memory resource instances. After allocation, the linker generates an amount of executable code, where the API instructions in the code: 1) are for using the allocated value to generate an address of a register in the appropriate non-memory resource instance, and 2) are for accessing the register.
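
    One way the API call might expand, assuming the symbol resolves at link time to an instance number that the expanded instructions use to form a register address; the names, base address, and stride are hypothetical.

```c
#include <stdio.h>
#include <stdint.h>

#define RING_REG_BASE   0x40000000u   /* assumed register layout */
#define RING_REG_STRIDE 0x100u

/* Stands in for the value the linker would allocate to the symbol MY_RING
 * (one of the non-memory resource instance numbers). */
static const uint32_t MY_RING = 3;

/* Stand-in for a register access on the device. */
static void reg_write(uint32_t addr, uint32_t value)
{
    printf("write 0x%08X <- 0x%08X\n", (unsigned)addr, (unsigned)value);
}

/* What the compiler might expand a ring_put(MY_RING, v) API call into:
 * 1) use the allocated value to form the address of the register in the
 *    selected resource instance, 2) access that register. */
static void ring_put(uint32_t ring_sym, uint32_t value)
{
    uint32_t addr = RING_REG_BASE + ring_sym * RING_REG_STRIDE;
    reg_write(addr, value);
}

int main(void)
{
    ring_put(MY_RING, 0xDEADBEEFu);
    return 0;
}
```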

TRANSACTIONAL MEMORY THAT SUPPORTS A GET FROM ONE OF A SET OF RINGS COMMAND
    160.
    Invention Application
    TRANSACTIONAL MEMORY THAT SUPPORTS A GET FROM ONE OF A SET OF RINGS COMMAND (In Force)

    Publication No.: US20150089165A1

    Publication Date: 2015-03-26

    Application No.: US14037239

    Filing Date: 2013-09-25

    Inventor: Gavin J. Stark

    CPC classification number: G06F9/3836 G06F9/3004 H04L45/74

    Abstract: A transactional memory (TM) includes a control circuit pipeline and an associated memory unit. The memory unit stores a plurality of rings. The pipeline maintains, for each ring, a head pointer and a tail pointer. A ring operation stage of the pipeline maintains the pointers as values are put onto and are taken off the rings. A put command causes the TM to put a value into a ring, provided the ring is not full. A get command causes the TM to take a value off a ring, provided the ring is not empty. A put with low priority command causes the TM to put a value into a ring, provided the ring has at least a predetermined amount of free buffer space. A get from a set of rings command causes the TM to get a value from the highest priority non-empty ring (of a specified set of rings).
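
    A software model of the four ring commands, assuming free-running head/tail counters, a ring of eight entries, and an illustrative low-priority free-space threshold; the real pipeline keeps this state in hardware.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define RING_SIZE         8
#define LOW_PRIO_MIN_FREE 4   /* threshold for put-with-low-priority */

typedef struct {
    uint32_t buf[RING_SIZE];
    unsigned head, tail;      /* free-running counters, maintained per ring
                                 by the pipeline's ring operation stage    */
} ring_t;

static unsigned ring_count(const ring_t *r) { return r->tail - r->head; }
static unsigned ring_free(const ring_t *r)  { return RING_SIZE - ring_count(r); }

static bool ring_put(ring_t *r, uint32_t v)          /* fails if full      */
{
    if (ring_free(r) == 0) return false;
    r->buf[r->tail++ % RING_SIZE] = v;
    return true;
}

static bool ring_put_low_prio(ring_t *r, uint32_t v) /* needs spare space  */
{
    if (ring_free(r) < LOW_PRIO_MIN_FREE) return false;
    return ring_put(r, v);
}

static bool ring_get(ring_t *r, uint32_t *v)         /* fails if empty     */
{
    if (ring_count(r) == 0) return false;
    *v = r->buf[r->head++ % RING_SIZE];
    return true;
}

/* Get from the highest-priority non-empty ring of the specified set
 * (set[0] is the highest priority). */
static bool ring_get_from_set(ring_t *set[], int n, uint32_t *v)
{
    for (int i = 0; i < n; i++)
        if (ring_get(set[i], v))
            return true;
    return false;
}

int main(void)
{
    ring_t hi = {0}, lo = {0};
    ring_t *set[] = { &hi, &lo };
    uint32_t v = 0;

    ring_put(&lo, 7);
    if (ring_get_from_set(set, 2, &v))
        printf("got %u from the highest-priority non-empty ring\n", (unsigned)v);
    printf("low-priority put %s\n",
           ring_put_low_prio(&hi, 1) ? "accepted" : "rejected");
    return 0;
}
```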
