Installation of cached downward paths based on upward data traffic in a non-storing low-power and lossy network
    33.
    Invention grant (In force)

    Publication number: US09596180B2

    Publication date: 2017-03-14

    Application number: US14590672

    Filing date: 2015-01-06

    Abstract: In one embodiment, a method comprises: receiving, by a parent network device in a directed acyclic graph (DAG) network topology, a data packet destined toward a DAG root and having been output by a target device in the network topology; identifying, by the parent network device based on the received data packet, an identifiable condition for caching a downward path enabling the parent network device to reach the target device independent of any route table entry in the parent network device; and caching, in the parent network device, the downward path enabling the parent network device to reach the target device independent of any route table entry in the parent network device.

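A minimal Python sketch of the caching idea described in the abstract (illustrative only, not the patented implementation; all class and field names are assumptions): a parent node in the DAG observes upward traffic and remembers which child a target's packets arrived from, so it can later route downward without any route-table entry.

```python
# Sketch: cache a downward path from observed upward traffic in a DAG node.
class ParentNode:
    def __init__(self):
        # target address -> child the target's upward traffic arrived from
        self.downward_cache = {}

    def forward_upward(self, packet, arrived_from):
        # Identifiable condition: an upward packet from a target below us.
        # Cache which child leads back to that target.
        self.downward_cache[packet["source"]] = arrived_from
        return "toward_root"

    def route_downward(self, target):
        # Reach the target via the cached path, independent of any
        # route-table entry in this node.
        return self.downward_cache.get(target)


node = ParentNode()
node.forward_upward({"source": "sensor-42"}, arrived_from="child-A")
print(node.route_downward("sensor-42"))  # child-A
```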

    Hashing algorithm for network receive filtering
    34.
    Invention grant (In force)

    Publication number: US09594842B2

    Publication date: 2017-03-14

    Application number: US14611105

    Filing date: 2015-01-30

    Abstract: Roughly described, a network interface device is assigned a maximum extent-of-search. A hash function is applied to the header information of each incoming packet, to generate a hash code for the packet. The hash code designates a particular subset of the table within which the particular header information should be found, and an iterative search is made within that subset. If the search locates a matching entry before the search limit is exceeded, then the incoming data packet is delivered to the receive queue identified in the matching entry. But if the search reaches the search limit before a matching entry is located, then the device delivers the packet to a default queue, such as a kernel queue, in the host computer system. The kernel is then responsible for delivering the packet to the correct endpoint.

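A minimal sketch of the bounded-search filtering scheme (an assumption-laden illustration, not the patent's hardware design; table size, bucket count, and search limit are invented for the example): the hash code picks a subset of the table, a bounded iterative search runs within it, and a miss within the limit falls back to a default (kernel) queue.

```python
import hashlib

TABLE_SIZE = 64
BUCKETS = 8          # the hash code selects one of these subsets
MAX_SEARCH = 4       # the assigned maximum extent-of-search
DEFAULT_QUEUE = "kernel"

table = [None] * TABLE_SIZE  # each entry: (header, receive_queue)

def bucket_start(header):
    # Hash the header information to designate a subset of the table.
    h = int(hashlib.sha256(header.encode()).hexdigest(), 16)
    return (h % BUCKETS) * (TABLE_SIZE // BUCKETS)

def insert(header, queue):
    start = bucket_start(header)
    for i in range(MAX_SEARCH):
        if table[start + i] is None:
            table[start + i] = (header, queue)
            return True
    return False  # subset full within the search limit

def deliver(header):
    # Iterative search within the designated subset, bounded by the limit.
    start = bucket_start(header)
    for i in range(MAX_SEARCH):
        entry = table[start + i]
        if entry is not None and entry[0] == header:
            return entry[1]       # matching entry: its receive queue
    return DEFAULT_QUEUE          # limit reached: fall back to the kernel


insert("10.0.0.1:80", "queue-3")
print(deliver("10.0.0.1:80"))  # queue-3
print(deliver("10.0.0.9:22"))  # kernel
```

Bounding the search keeps the worst-case lookup latency fixed, at the cost of occasionally punting a known flow to the slower kernel path.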

    PPI allocation request and response for accessing a memory system
    35.
    Invention grant (In force)

    Publication number: US09559988B2

    Publication date: 2017-01-31

    Application number: US14464692

    Filing date: 2014-08-20

    CPC classification number: H04L49/3072 H04L45/742 H04L49/9042

    Abstract: Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than the PDRSDs managing and handling the storing of packet portions into the memory, a packet engine is provided. The PDRSDs use a PPI (Packet Portion Identifier) Addressing Mode (PAM) in communicating with the packet engine and in instructing the packet engine to store packet portions. A PDRSD requests a PPI from the packet engine in a PPI allocation request, and is allocated a PPI by the packet engine in a PPI allocation response, and then tags the packet portion to be written with the PPI and sends the packet portion and the PPI to the packet engine.

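The PPI request/response exchange can be sketched as follows (a simplified software analogy, not the patented hardware; the pool size and method names are assumptions): a PDRSD asks the packet engine for a PPI, receives one in the response, then tags its packet portion with that PPI when writing.

```python
# Sketch: PPI (Packet Portion Identifier) allocation by a packet engine.
class PacketEngine:
    def __init__(self, num_ppis=4):
        self.free = list(range(num_ppis))  # pool of unallocated PPIs
        self.memory = {}                   # the single shared memory: PPI -> portion

    def allocate_ppi(self):
        # PPI allocation response: hand out an identifier, or None if exhausted.
        return self.free.pop(0) if self.free else None

    def write(self, ppi, portion):
        # The PDRSD tags the portion with its PPI and sends both here.
        self.memory[ppi] = portion


engine = PacketEngine()
ppi = engine.allocate_ppi()           # PPI allocation request/response
engine.write(ppi, b"header bytes")    # tagged write into the single memory
print(engine.memory[ppi])
```

Centralizing allocation in the engine means the multiple PDRSDs never need to coordinate memory addresses among themselves.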

    Method and apparatus for improving CAM learn throughput using a cache
    36.
    Invention grant (In force)

    Publication number: US09559987B1

    Publication date: 2017-01-31

    Application number: US12239084

    Filing date: 2008-09-26

    CPC classification number: H04L49/3009 G11C15/00 H04L45/742 H04L45/7457

    Abstract: An apparatus and method of using a cache to improve a learn rate for a content-addressable memory (“CAM”) are disclosed. A network device such as a router or a switch, in one embodiment, includes a key generator, a searching circuit, and a key cache, wherein the key generator is capable of generating a first lookup key in response to a first packet. The searching circuit is configured to search the content of the CAM to match the first lookup key. If the first lookup key is not found in the CAM, the key cache stores the first lookup key in response to a first miss.

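A minimal software analogy of the learn-rate improvement (illustrative only; the real device uses hardware CAM writes, and the class and method names here are assumptions): on a CAM miss, the new lookup key is parked in a fast key cache instead of stalling the pipeline on a slow CAM write, and the cache is drained into the CAM later.

```python
# Sketch: key cache in front of a CAM to improve learn throughput.
class CamWithLearnCache:
    def __init__(self):
        self.cam = set()       # entries already learned into the CAM
        self.key_cache = []    # keys awaiting a (slow) CAM write

    def lookup(self, key):
        if key in self.cam:
            return "hit"
        # First miss: stash the key in the cache rather than blocking
        # on an immediate CAM write.
        if key not in self.key_cache:
            self.key_cache.append(key)
        return "miss"

    def drain_cache(self):
        # Background step: commit cached keys into the CAM.
        while self.key_cache:
            self.cam.add(self.key_cache.pop(0))


dev = CamWithLearnCache()
print(dev.lookup("flow-1"))  # miss (key is now cached)
dev.drain_cache()
print(dev.lookup("flow-1"))  # hit
```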

    System and method of high volume rule engine
    38.
    Invention grant (In force)

    Publication number: US09491069B2

    Publication date: 2016-11-08

    Application number: US13953090

    Filing date: 2013-07-29

    CPC classification number: H04L43/028 H04L45/742 H04L45/745 H04L63/0263

    Abstract: A rule engine is configured with at least one hash table that summarizes the rules managed by the engine. The rule engine receives rules and automatically adjusts the hash table to account for added rules and/or to remove cancelled rules. The adjustment may be performed while the rule engine is filtering packets, without stopping. The rules may be grouped into a plurality of rule types, and for each rule type the rule engine performs one or more accesses to at least one hash table to determine whether any of the rules of that type match the packet. In some embodiments, the rule engine may automatically select the rule types responsive to a set of rules provided to the rule engine and adapt its operation to the specific rules it is currently handling, while not spending resources on checking rule types not currently used.

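The per-type hash-table organization can be sketched like this (a simplified illustration under assumed names, not the patented engine): each rule type in use gets its own hash table, tables appear and disappear as rules are added or removed without pausing filtering, and matching a packet costs one hash access per active rule type.

```python
# Sketch: rule engine with one hash table per rule type in active use.
class RuleEngine:
    def __init__(self):
        self.tables = {}  # rule_type -> {match key: action}

    def add_rule(self, rule_type, key, action):
        # Adjust the tables live; filtering is never stopped.
        self.tables.setdefault(rule_type, {})[key] = action

    def remove_rule(self, rule_type, key):
        self.tables.get(rule_type, {}).pop(key, None)
        if not self.tables.get(rule_type):
            # Drop empty types so no resources are spent checking them.
            self.tables.pop(rule_type, None)

    def filter(self, packet):
        # One hash access per rule type currently in use.
        for rule_type, table in self.tables.items():
            action = table.get(packet.get(rule_type))
            if action:
                return action
        return "pass"


engine = RuleEngine()
engine.add_rule("dst_port", 22, "drop")
print(engine.filter({"dst_port": 22}))  # drop
print(engine.filter({"dst_port": 80}))  # pass
```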

    Caching of look-up rules based on flow heuristics to enable high speed look-up
    39.
    Invention grant (In force)

    Publication number: US09477604B2

    Publication date: 2016-10-25

    Application number: US14809139

    Filing date: 2015-07-24

    Abstract: In one embodiment, a computer program product includes a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code including computer readable program code configured to initialize an internal look-up table cache provided internally to a switching processor, the internal look-up table cache being configured to store a plurality of look-up entries and being organized into at least three segments: a persistent flows entries segment, a non-persistent flows entries segment, and an access control list (ACL) segment. Each look-up entry relates to a traffic flow which has been or is anticipated to be received by a switching processor configured to access the internal look-up table cache. The computer readable program code is also configured to manage the internal look-up table cache to store entries relating to a particular segment type into a corresponding segment of the internal look-up table cache.

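The three-segment cache organization reads naturally as a small data structure (a sketch under assumed names and lookup order, not the claimed implementation): entries are stored into the segment matching their type, and lookups consult the segments in turn.

```python
# Sketch: internal look-up table cache organized into three segments.
class SegmentedLookupCache:
    SEGMENTS = ("persistent_flows", "non_persistent_flows", "acl")

    def __init__(self):
        self.segments = {name: {} for name in self.SEGMENTS}

    def store(self, segment, flow_key, entry):
        # Manage entries into the segment matching their type.
        if segment not in self.segments:
            raise ValueError(f"unknown segment: {segment}")
        self.segments[segment][flow_key] = entry

    def lookup(self, flow_key):
        # Assumed order: persistent flows first, then non-persistent, then ACL.
        for name in self.SEGMENTS:
            if flow_key in self.segments[name]:
                return name, self.segments[name][flow_key]
        return None


cache = SegmentedLookupCache()
cache.store("persistent_flows", "flowA", {"out_port": 3})
print(cache.lookup("flowA"))
```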

    DATA PACKET RETRANSMISSION PROCESSING
    40.
    Invention application (Pending, published)

    Publication number: US20160308765A1

    Publication date: 2016-10-20

    Application number: US14689100

    Filing date: 2015-04-17

    Abstract: Systems and methods for performing retransmission of data packets over a network. A node receives a data packet with a source and a destination address. The data packet is sent along a network path to the destination address, and information associated with the data packet is sent to a controller node that is independent of the network path. The controller receives information associated with a data packet from any forwarder node within a plurality of forwarder nodes, each monitoring communications along a separate communications path. An indication of a receipt acknowledgement for the data packet is received from a second forwarder node that is separate from the first forwarder node and the controller node. The receipt acknowledgement is correlated with the data packet and, based on the correlating, data associated with retransmission processing of the data packet is deleted.

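The controller's correlate-and-delete step can be sketched as follows (a minimal illustration with invented names, not the claimed system): one forwarder reports the outgoing packet, a second, independent forwarder reports the acknowledgement, and the controller correlates the two and deletes the retransmission state.

```python
# Sketch: controller correlating ACK reports with retransmission state.
class Controller:
    def __init__(self):
        self.pending = {}  # packet id -> retransmission-processing data

    def on_packet_info(self, pkt_id, src, dst):
        # A forwarder on the data path reports a packet it sent.
        self.pending[pkt_id] = {"src": src, "dst": dst}

    def on_ack(self, pkt_id):
        # A second, separate forwarder reports the receipt acknowledgement.
        # Correlate it with the packet and delete the retransmission data.
        self.pending.pop(pkt_id, None)

    def needs_retransmit(self, pkt_id):
        return pkt_id in self.pending


ctrl = Controller()
ctrl.on_packet_info(7, "10.0.0.1", "10.0.0.2")
print(ctrl.needs_retransmit(7))  # True
ctrl.on_ack(7)
print(ctrl.needs_retransmit(7))  # False
```

Keeping the controller off the data path means retransmission bookkeeping survives even when the original forwarder never sees the acknowledgement.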
