STORING AN ENTROPY SIGNAL FROM A SELF-TIMED LOGIC BIT STREAM GENERATOR IN AN ENTROPY STORAGE RING
    161.
    Invention application (granted)

    Publication number: US20150088950A1

    Publication date: 2015-03-26

    Application number: US14037312

    Filing date: 2013-09-25

    Inventor: Gavin J. Stark

    CPC classification number: G06F7/584

    Abstract: A Self-Timed Logic Entropy Bit Stream Generator (STLEBSG) outputs a bit stream having non-deterministic entropy. The bit stream is supplied onto an input of a signal storage ring so that entropy of the bit stream is then stored in the ring as the bit stream circulates in the ring. Depending on the configuration of the ring, the bit stream as it circulates undergoes permutations, but the signal storage ring nonetheless stores the entropy of the injected bit stream. In one example, the STLEBSG is disabled and the bit stream is no longer supplied to the ring, but the ring continues to circulate and stores entropy of the original bit stream. With the STLEBSG disabled, a signal output from the ring is used to generate one or more random numbers.
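
    A minimal Python sketch of the idea, assuming a software model in place of the hardware: bits from a stand-in for the STLEBSG are injected into a circulating ring, the ring keeps permuting its contents after the source is disabled, and its output is then sampled to form a random number. All class and function names below are hypothetical.

        import random
        from collections import deque

        class EntropyStorageRing:
            """Software model of a signal storage ring: bits circulate and may be
            permuted (here an XOR feedback) without losing the injected entropy."""

            def __init__(self, length=64):
                self.ring = deque([0] * length, maxlen=length)

            def inject(self, bit):
                # Mix one bit from the bit stream generator into the ring.
                self.ring.append(self.ring.popleft() ^ bit)

            def circulate(self):
                # One rotation with a simple permutation step; no new entropy is
                # added, the stored entropy is only re-arranged.
                self.ring.append(self.ring.popleft() ^ self.ring[0])

            def output_bit(self):
                return self.ring[0]

        def noisy_bit_stream(n):
            # Stand-in for the STLEBSG; real hardware derives these bits from
            # self-timed logic, here the OS RNG is used purely for illustration.
            return (random.getrandbits(1) for _ in range(n))

        ring = EntropyStorageRing()
        for b in noisy_bit_stream(256):      # generator enabled: entropy flows in
            ring.inject(b)

        # Generator disabled: the ring keeps circulating and is sampled to
        # produce a random number.
        word = 0
        for _ in range(32):
            ring.circulate()
            word = (word << 1) | ring.output_bit()
        print(f"32-bit random value from the ring: {word:#010x}")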

    PICO ENGINE POOL TRANSACTIONAL MEMORY ARCHITECTURE
    162.
    Invention application (granted)

    Publication number: US20150058551A1

    Publication date: 2015-02-26

    Application number: US13970601

    Filing date: 2013-08-20

    Applicant: Gavin J. Stark

    Inventor: Gavin J. Stark

    CPC classification number: G06F9/467 G06F15/163 H04L45/745 H04L45/7457

    Abstract: A transactional memory (TM) includes a selectable bank of hardware algorithm prework engines, a selectable bank of hardware lookup engines, and a memory unit. The memory unit stores result values (RVs), instructions, and lookup data operands. The transactional memory receives a lookup command across a bus from one of a plurality of processors. The lookup command includes a source identification value, data, a table number value, and a table set value. In response to the lookup command, the transactional memory selects one hardware algorithm prework engine and one hardware lookup engine to perform the lookup operation. The selected hardware algorithm prework engine modifies data included in the lookup command. The selected hardware lookup engine performs a lookup operation using the modified data and lookup operands provided by the memory unit. In response to performing the lookup operation, the transactional memory returns a result value and optionally an instruction.
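
    A rough Python sketch, under assumed names, of the dispatch described above: a lookup command selects one prework engine from a bank, the prework engine transforms the command data, a lookup engine resolves the modified data against a table held in the memory unit, and a result value plus an optional instruction come back.

        from dataclasses import dataclass
        from typing import Callable, List, Optional, Tuple

        @dataclass
        class LookupCommand:
            # Fields named in the abstract; the values used below are illustrative.
            source_id: int
            data: int
            table_number: int
            table_set: int

        # Hypothetical bank of prework engines: each transforms the command data
        # before the lookup proper (pass-through, hashing, ...).
        PREWORK_ENGINES: List[Callable[[int], int]] = [
            lambda d: d,
            lambda d: (d * 2654435761) & 0xFFFFFFFF,
        ]

        # The memory unit: table_number -> {key: (result value, instruction)}.
        MEMORY_UNIT = {0: {0x12345678: (42, "forward"), 0xDEADBEEF: (7, None)}}

        def lookup_engine_direct(key: int, table: dict) -> Tuple[int, Optional[str]]:
            # One possible lookup algorithm, modelled as a dictionary lookup.
            return table.get(key, (0, None))

        def transactional_memory_lookup(cmd: LookupCommand):
            # Select one prework engine and one lookup engine for this command.
            prework = PREWORK_ENGINES[cmd.table_set % len(PREWORK_ENGINES)]
            modified = prework(cmd.data)
            result_value, instruction = lookup_engine_direct(
                modified, MEMORY_UNIT[cmd.table_number])
            return result_value, instruction

        print(transactional_memory_lookup(LookupCommand(1, 0x12345678, 0, 0)))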

    Traffic Data Pre-Filtering
    163.
    Invention application (granted)

    Publication number: US20150003237A1

    Publication date: 2015-01-01

    Application number: US13929809

    Filing date: 2013-06-28

    CPC classification number: H04L45/745 H04L49/00

    Abstract: A network appliance includes a first and second compliance checker and an action identifier. Each compliance checker includes a first and second lookup operator. Traffic data is received by the network appliance. A field within the traffic data is separated into a first and second subfield. The first lookup operator performs a lookup operation on the first subfield of the traffic data and generates a first lookup result. The second lookup operator performs a lookup operation on the second subfield of the traffic data and generates a second lookup result. A compliance result is generated by a lookup result analyzer based on the first and second lookup results. An action is generated by an action identifier based at least in part on the compliance result. The action indicates whether or not additional inspection of the traffic data is required. The first and second lookup operators may perform different lookup methodologies.
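
    A small Python sketch of the pre-filtering flow, with hypothetical tables and field widths: a field is split into two subfields, each subfield goes through a different lookup methodology, the two lookup results are combined into a compliance result, and the action identifier turns that into a pass/inspect decision.

        # Hypothetical tables for the two lookup operators; a real appliance
        # would use hardware structures such as hash and range lookups.
        EXACT_MATCH_TABLE = {0x0A00: True}        # first subfield -> compliant?
        RANGE_TABLE = [(0x0000, 0x00FF, True),    # (low, high, compliant?)
                       (0x0100, 0xFFFF, False)]

        def lookup_exact(subfield):
            return EXACT_MATCH_TABLE.get(subfield, False)

        def lookup_range(subfield):
            # A different lookup methodology for the second subfield.
            for low, high, compliant in RANGE_TABLE:
                if low <= subfield <= high:
                    return compliant
            return False

        def compliance_checker(field):
            # Split a 32-bit field into two 16-bit subfields.
            first, second = (field >> 16) & 0xFFFF, field & 0xFFFF
            # Lookup result analyzer: combine the two lookup results.
            return lookup_exact(first) and lookup_range(second)

        def action_identifier(compliance_result):
            # The action says whether deeper inspection is still required.
            return "pass" if compliance_result else "inspect_further"

        print(action_identifier(compliance_checker(0x0A000042)))  # pass
        print(action_identifier(compliance_checker(0x0A000142)))  # inspect_further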

    EFFICIENT COMPLEX NETWORK TRAFFIC MANAGEMENT IN A NON-UNIFORM MEMORY SYSTEM
    164.
    Invention application (granted)

    Publication number: US20140330991A1

    Publication date: 2014-11-06

    Application number: US13875968

    Filing date: 2013-05-02

    Abstract: A network appliance includes a first processor, a second processor, a first storage device, and a second storage device. A first status information is stored in the first storage device. The first processor is coupled to the first storage device. A queue of data is stored in the second storage device. The first status information indicates if traffic data stored in the queue of data is permitted to be transmitted. The second processor is coupled to the second storage device. The first processor communicates with the second processor. The traffic data includes packet information. The first storage device is a high speed memory only accessible to the first processor. The second storage device is a high capacity memory accessible to multiple processors. The first status information is a permitted bit that indicates if the traffic data within the queue of data is permitted to be transmitted.
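
    A brief Python sketch of the split described above, with illustrative names: the permitted bits live in memory private to the first processor, the packet queues live in shared high-capacity memory, and the second processor only dequeues traffic the first processor reports as permitted.

        from collections import deque

        # High-speed memory private to the first processor: one permitted bit
        # per queue (names and layout are illustrative).
        permitted_bits = {0: True, 1: False}

        # High-capacity memory shared by multiple processors: the data queues.
        packet_queues = {0: deque(["pkt-a", "pkt-b"]), 1: deque(["pkt-c"])}

        def first_processor_permits(queue_id):
            # The first processor consults only its fast local status memory.
            return permitted_bits.get(queue_id, False)

        def second_processor_transmit(queue_id):
            # The second processor dequeues from shared memory only when the
            # first processor reports the queue as permitted.
            if first_processor_permits(queue_id) and packet_queues[queue_id]:
                return packet_queues[queue_id].popleft()
            return None

        print(second_processor_transmit(0))   # "pkt-a": queue 0 is permitted
        print(second_processor_transmit(1))   # None: queue 1 is not permitted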

    TRANSACTIONAL MEMORY THAT PERFORMS A PMM 32-BIT LOOKUP OPERATION
    165.
    Invention application (granted)

    Publication number: US20140136798A1

    Publication date: 2014-05-15

    Application number: US13675394

    Filing date: 2012-11-13

    Inventor: Gavin J. Stark

    Abstract: A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a memory address. In response to the command, the TM pulls an input value (IV). The memory address is used to read a word containing multiple result values (RVs), multiple reference values, and multiple prefix values from memory. A selecting circuit within the TM uses a starting bit position and a mask size to select a portion of the IV. The portion of the IV is a lookup key value (LKV). Mask values are generated based on the prefix values. The LKV is masked by each mask value thereby generating multiple masked values that are compared to the reference values. Based on the comparison a lookup table generates a selector value that is used to select a result value. The selected result value is then communicated to the processor via the bus.
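
    A Python sketch of the lookup, assuming the memory word is modelled as parallel arrays of reference, prefix, and result values: the lookup key value is cut out of the input value, masks are derived from the prefix values, and the index of the first matching reference acts as the selector for the result value. Field names and packing are illustrative.

        def prefix_to_mask(prefix_len, width):
            # A prefix value of N selects the top N bits of the lookup key.
            return ((1 << prefix_len) - 1) << (width - prefix_len) if prefix_len else 0

        def pmm_lookup(input_value, start_bit, mask_size, word):
            # Selecting circuit: cut the lookup key value (LKV) out of the IV.
            lkv = (input_value >> start_bit) & ((1 << mask_size) - 1)
            # Mask the LKV with each prefix-derived mask and compare it to the
            # corresponding reference value; the index of the first match acts
            # as the selector value for the result values.
            for ref, prefix, rv in zip(word["refs"], word["prefixes"], word["rvs"]):
                mask = prefix_to_mask(prefix, mask_size)
                if (lkv & mask) == (ref & mask):
                    return rv            # selected result value, sent over the bus
            return None                  # no entry matched

        # One memory word modelled as parallel arrays (packing is illustrative).
        WORD = {
            "refs":     [0xC0A80000, 0x0A000000],   # 192.168/16, 10/8
            "prefixes": [16, 8],
            "rvs":      [101, 202],
        }
        print(pmm_lookup(0xC0A80101, start_bit=0, mask_size=32, word=WORD))  # 101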

    INTER-PACKET INTERVAL PREDICTION LEARNING ALGORITHM
    166.
    Invention application (granted)

    Publication number: US20140133320A1

    Publication date: 2014-05-15

    Application number: US13675620

    Filing date: 2012-11-13

    Abstract: An appliance receives packets that are part of a flow pair, each packet sharing an application protocol. The appliance determines the application protocol of the packets by performing deep packet inspection (DPI) on the packets. Packet sizes are measured and converted into packet size states. Packet size states, packet sequence numbers, and packet flow directions are used to create an application protocol estimation table (APET). The APET is used during normal operation to estimate the application protocol of a flow pair without performing time consuming DPI. The appliance then determines inter-packet intervals between received packets. The inter-packet intervals are converted into inter-packet interval states. The inter-packet interval states and packet sequence numbers are used to create an inter-packet interval prediction table. The appliance then stores an inter-packet interval prediction table for each application protocol. The inter-packet interval prediction table is used during operation to predict the inter-packet interval between packets.
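
    A compact Python sketch of the learning step, with assumed state boundaries: measured inter-packet intervals are quantized into interval states, observations are keyed by application protocol and packet sequence number, and the averaged state per key forms the inter-packet interval prediction table.

        from collections import defaultdict
        from statistics import mean

        def interval_to_state(interval_s):
            # Quantize measured inter-packet intervals into coarse states
            # (the boundaries here are assumed, not taken from the patent).
            if interval_s < 0.001:
                return "sub-ms"
            if interval_s < 0.1:
                return "ms"
            return "long"

        # Learning phase: observed intervals keyed by application protocol and
        # packet sequence number within the flow pair.
        observations = defaultdict(list)

        def learn(protocol, seq_no, interval_s):
            observations[(protocol, seq_no)].append(interval_s)

        def build_prediction_table():
            # One inter-packet interval prediction table entry per key.
            return {key: interval_to_state(mean(vals))
                    for key, vals in observations.items()}

        learn("http", 1, 0.0004)
        learn("http", 1, 0.0006)
        learn("http", 2, 0.05)
        table = build_prediction_table()
        print(table[("http", 1)])   # "sub-ms"
        print(table[("http", 2)])   # "ms"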

    NETWORK APPLIANCE THAT DETERMINES WHAT PROCESSOR TO SEND A FUTURE PACKET TO BASED ON A PREDICTED FUTURE ARRIVAL TIME
    167.
    Invention application (granted)

    Publication number: US20140126367A1

    Publication date: 2014-05-08

    Application number: US13668251

    Filing date: 2012-11-03

    CPC classification number: H04L45/30 H04L43/0852 H04L47/245 H04L47/283

    Abstract: A network appliance includes a network processor and several processing units. Packets of a flow pair are received onto the network appliance. Without performing deep packet inspection on any packet of the flow pair, the network processor analyzes the flows, estimates therefrom the application protocol used, and determines a predicted future time when the next packet will likely be received. The network processor determines to send the next packet to a selected one of the processing units based in part on the predicted future time. In some cases, the network processor causes a cache of the selected processing unit to be preloaded shortly before the predicted future time. When the next packet is actually received, the packet is directed to the selected processing unit. In this way, packets are directed to processing units within the network appliance based on predicted future packet arrival times without the use of deep packet inspection.
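
    A Python sketch of the scheduling idea, with a hypothetical preload lead time and load metric: given a predicted inter-packet interval, the appliance picks a processing unit, schedules a cache preload shortly before the predicted arrival, and dispatches the packet to that unit when it arrives.

        import heapq

        PRELOAD_LEAD_TIME = 0.002        # seconds before predicted arrival (assumed)
        processing_units = [0, 1, 2, 3]  # illustrative pool of processing units
        event_queue = []                 # (time, action, flow_id, unit)

        def schedule_next_packet(flow_id, now, predicted_interval, load):
            # Pick the least-loaded processing unit and schedule a cache preload
            # shortly before the predicted arrival of the flow's next packet.
            predicted_arrival = now + predicted_interval
            unit = min(processing_units, key=lambda u: load[u])
            heapq.heappush(event_queue,
                           (predicted_arrival - PRELOAD_LEAD_TIME, "preload", flow_id, unit))
            heapq.heappush(event_queue,
                           (predicted_arrival, "dispatch", flow_id, unit))
            return unit

        load = {0: 5, 1: 2, 2: 7, 3: 4}
        schedule_next_packet(flow_id=42, now=0.0, predicted_interval=0.010, load=load)
        while event_queue:
            t, action, flow, unit = heapq.heappop(event_queue)
            print(f"t={t:.3f}s: {action} flow {flow} on processing unit {unit}")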

    TRANSACTIONAL MEMORY THAT PERFORMS AN ATOMIC LOOK-UP, ADD AND LOCK OPERATION
    168.
    Invention application (granted)

    Publication number: US20140075147A1

    Publication date: 2014-03-13

    Application number: US13609039

    Filing date: 2012-09-10

    Abstract: A transactional memory (TM) receives an Atomic Look-up, Add and Lock (ALAL) command across a bus from a client. The command includes a first value. The TM pulls a second value. The TM uses the first value to read a set of memory locations, and determines if any of the locations contains the second value. If no location contains the second value, then the TM locks a vacant location, adds the second value to the vacant location, and sends a result to the client. If a location contains the second value and it is not locked, then the TM locks the location and returns a result to the client. If a location contains the second value and it is locked, then the TM returns a result to the client. Each location has an associated data structure. Setting the lock field of a location locks access to its associated data structure.
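
    A Python sketch of the ALAL semantics over one set of memory locations (the set the command's first value would select); the whole function stands in for what the hardware does atomically, and the return strings are illustrative.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Location:
            value: Optional[int] = None   # None models a vacant location
            locked: bool = False          # lock field guards the associated data structure

        def alal(bucket, second_value):
            # Look up the second value in the set of memory locations; in
            # hardware the whole look-up/add/lock sequence is atomic.
            for loc in bucket:
                if loc.value == second_value:
                    if loc.locked:
                        return "found_locked"        # caller must retry later
                    loc.locked = True
                    return "found_and_locked"
            for loc in bucket:                        # not present: add and lock
                if loc.value is None:
                    loc.value = second_value
                    loc.locked = True
                    return "added_and_locked"
            return "set_full"

        bucket = [Location() for _ in range(4)]
        print(alal(bucket, 0xBEEF))   # added_and_locked
        print(alal(bucket, 0xBEEF))   # found_locked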

    Transactional Memory that Performs a Direct 24-BIT Lookup Operation
    169.
    Invention application (granted)

    Publication number: US20140025920A1

    Publication date: 2014-01-23

    Application number: US13552627

    Filing date: 2012-07-18

    Abstract: A transactional memory (TM) receives a lookup command across a bus from a processor. Only final result values are stored in memory. The command includes a base address, a starting bit position, and a mask size. In response to the lookup command, the TM pulls an input value (IV). A selecting circuit within the TM uses the starting bit position and mask size to select a portion of the IV. The portion of the IV and the base address are used to generate a memory address. The memory address is used to read a word containing multiple result values (RVs) from memory. One RV from the word is selected using a multiplexing circuit and a result location value (RLV) generated from the portion of the IV. A word selector circuit and arithmetic circuits are used to generate the memory address and RLV. The TM sends the selected RV to the processor.
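
    A Python sketch of the direct lookup, assuming a fixed number of result values packed per memory word: the selected portion of the input value is split arithmetically into a word address offset and a result location value, one word is read, and the result location value drives the multiplexer that picks the final result value.

        RVS_PER_WORD = 4   # assumed packing; the hardware word width fixes this

        # Memory modelled as a list of words, each holding several final result
        # values (here chosen so that the result equals the table index).
        memory = [[i * RVS_PER_WORD + j for j in range(RVS_PER_WORD)]
                  for i in range(16)]

        def direct_lookup(input_value, base_address, start_bit, mask_size):
            # Selecting circuit: cut the relevant portion out of the input value.
            portion = (input_value >> start_bit) & ((1 << mask_size) - 1)
            # Word selector / arithmetic circuits: which word, and which result
            # location within that word.
            memory_address = base_address + portion // RVS_PER_WORD
            result_location_value = portion % RVS_PER_WORD
            word = memory[memory_address]             # one memory read
            return word[result_location_value]        # multiplexer picks one RV

        # Bits [4..9] of the input value index a table at base address 0.
        print(direct_lookup(0x000001A0, base_address=0, start_bit=4, mask_size=6))  # 26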

    Transactional Memory that Performs a Direct 32-bit Lookup Operation
    170.
    Invention application (granted)

    Publication number: US20140025918A1

    Publication date: 2014-01-23

    Application number: US13552605

    Filing date: 2012-07-18

    CPC classification number: H04L12/4625 G06F9/3004 G06F12/06 G06F15/163

    Abstract: A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a base address, a starting bit position, and a mask size. In response to the lookup command, the TM pulls an input value (IV). The TM uses the starting bit position and the mask size to select a portion of the IV. A first sub-portion of the portion of the IV and the base address are summed to generate a memory address. The memory address is used to read a word containing multiple result values (RVs) from memory. One RV from the word is selected using a multiplexing circuit and a second sub-portion of the portion of the IV. If the selected RV is a final value, then the lookup operation is complete and the TM sends the RV to the processor; otherwise the TM performs another lookup operation based upon the selected RV.
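
    A Python sketch of the multi-level variant, with an assumed two-entry word packing and an explicit is-final flag: each stage sums a sub-portion of the input value with the base address, reads one word, multiplexes out a result value, and either returns it (final) or uses it as the base address for the next stage.

        RVS_PER_WORD = 2   # assumed packing per memory word

        # Each stored entry is (is_final, value); a non-final value supplies the
        # base address of the next stage of a multi-level table.
        memory = {
            0:  [(False, 10), (True, 0xAAAA)],      # word at address 0
            10: [(True, 0x1111), (True, 0x2222)],   # word at address 10
        }

        def direct_32_lookup(input_value, base_address, start_bit, mask_size):
            while True:
                portion = (input_value >> start_bit) & ((1 << mask_size) - 1)
                # The first sub-portion is summed with the base address, the
                # second sub-portion drives the output multiplexer.
                first_sub, second_sub = portion >> 1, portion & 0x1
                word = memory[base_address + first_sub]    # one memory read
                is_final, value = word[second_sub]
                if is_final:
                    return value                           # send the RV to the processor
                # Not final: the RV is the base address of the next lookup, which
                # in this sketch consumes the next bits of the input value.
                base_address = value
                start_bit += mask_size

        print(hex(direct_32_lookup(0b0100, base_address=0, start_bit=0, mask_size=2)))  # 0x2222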
