Load balancing across multiple network address translation (NAT) instances and/or processors
    1.
    Invention grant (in force)

    Publication number: US08005098B2

    Publication date: 2011-08-23

    Application number: US12205848

    Filing date: 2008-09-05

    IPC classification: H04L12/28

    Abstract: Disclosed are, inter alia, methods, apparatus, computer-storage media, mechanisms, and means associated with load balancing across multiple network address translation (NAT) instances and/or processors. N network address translation (NAT) processors and/or instances are each assigned a portion of the source address traffic in order to load balance the network address translation among them. Additionally, the address space of translated addresses is partitioned and uniquely assigned to the NAT processors and/or instances such that the identity of the assigned NAT processor and/or instance associated with a received translated address can be readily determined therefrom, and then used to network-address-translate that received packet.

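    The assignment of source traffic described above can be pictured with a short sketch. This is only a minimal illustration, assuming a simple modulo hash over the numeric source address; the function name, instance count, and hashing rule are invented here and are not taken from the patent.

        # Minimal sketch (assumed details): spread source-address traffic across
        # N NAT instances by hashing the source address.
        import ipaddress

        NUM_NAT_INSTANCES = 4  # the "N" of the abstract; value chosen for illustration

        def nat_instance_for_source(src_ip: str) -> int:
            """Map a packet's source address to one of the N NAT instances."""
            addr = int(ipaddress.ip_address(src_ip))
            return addr % NUM_NAT_INSTANCES  # simple modulo split of the source space

        print(nat_instance_for_source("10.1.2.3"))  # prints an instance index in 0..3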

Vectorized software packet forwarding
    2.
    Invention grant (in force)

    Publication number: US07961636B1

    Publication date: 2011-06-14

    Application number: US10855213

    Filing date: 2004-05-27

    IPC classification: H04L12/28

    Abstract: An intermediate network node is configured to forward a plurality of packets concurrently, e.g., as a vector, rather than one packet at a time. As such, the node can load a single sequence of forwarding instructions that may be repeatedly executed for packets in the vector. In addition, the intermediate network node adaptively controls the rate at which it processes data packets through a directed forwarding graph. To that end, the intermediate node is configured to dynamically select the number of packets per vector, i.e., vector size, processed at each node of the forwarding graph. Further, the intermediate node may also be configured to dynamically select timing intervals for one or more “node timers” used to control the rate at which packets traverse the graph. Illustratively, the vector size and node-timer intervals are selected so that the average latency through the forwarding graph is less than a predetermined target latency, e.g., 50 microseconds (μs).

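    The adaptive rate control can be sketched as a small feedback rule. This is only an illustration of the idea, assuming a halve-on-overshoot, grow-slowly adjustment; the constants, names, and rule are invented here rather than taken from the patent.

        # Minimal sketch (assumed details): pick the next vector size so that the
        # measured average latency through the forwarding graph stays under a target.
        TARGET_LATENCY_US = 50.0          # example target from the abstract
        MIN_VECTOR, MAX_VECTOR = 1, 256   # illustrative bounds

        def adjust_vector_size(current_size: int, measured_latency_us: float) -> int:
            """Return the vector size to use for the next pass through the graph."""
            if measured_latency_us > TARGET_LATENCY_US:
                # Over budget: process fewer packets per node to cut latency.
                return max(MIN_VECTOR, current_size // 2)
            # Under budget: batch more packets to amortize the loaded instructions.
            return min(MAX_VECTOR, current_size + 4)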

Flexible WAN protocol call admission control algorithm
    3.
    Invention grant (in force)

    Publication number: US07289441B1

    Publication date: 2007-10-30

    Application number: US10200653

    Filing date: 2002-07-22

    IPC classification: H04L12/16

    Abstract: An intermediate network node is configured to drop or reject new client sessions when its available resources are below a predetermined level. In this manner, the intermediate node can efficiently process a large number of new session attempts at substantially the same time. The intermediate node monitors the availability of its resources by calculating a load metric. The load metric is based on one or more partial load metrics, each corresponding to a different system resource. The load metric is compared with a predetermined value to determine whether the node has enough available resources to continue establishing new client sessions. Alternatively, the intermediate node rejects new client sessions when the total number of allocated “abstract resource units” rises above a predetermined level. That is, client sessions are assigned a predetermined number of abstract resource units on a per-protocol basis, and a resource counter stores the number of abstract resource units allocated by the intermediate node. Each protocol is assigned a different time interval on a timing wheel, after which the number of abstract resource units assigned for the protocol is subtracted from the resource counter. The intermediate node actively or passively rejects new client sessions until the timing wheel sufficiently decreases the counter below the predetermined level.

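    The abstract-resource-unit variant can be sketched as a counter plus deferred releases. This is only an illustration under assumed details: the per-protocol cost, the hold interval, and the use of a priority queue standing in for the timing wheel are all invented for the example.

        # Minimal sketch (assumed details): admit a session only if its protocol's
        # unit cost fits under the limit; release the units after a protocol-specific
        # interval, tracked here with a heap rather than a real timing wheel.
        import heapq

        class AdmissionController:
            def __init__(self, max_units: int):
                self.max_units = max_units
                self.in_use = 0           # resource counter of allocated units
                self.pending = []         # (release_time, units) min-heap

            def admit(self, now: float, unit_cost: int, hold_time: float) -> bool:
                self._expire(now)
                if self.in_use + unit_cost > self.max_units:
                    return False          # reject or drop the new session
                self.in_use += unit_cost
                heapq.heappush(self.pending, (now + hold_time, unit_cost))
                return True

            def _expire(self, now: float) -> None:
                # Subtract units whose protocol interval has elapsed.
                while self.pending and self.pending[0][0] <= now:
                    _, units = heapq.heappop(self.pending)
                    self.in_use -= units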

Load Balancing across Multiple Network Address Translation (NAT) Instances and/or Processors
    4.
    Invention application (in force)

    Publication number: US20100061380A1

    Publication date: 2010-03-11

    Application number: US12205848

    Filing date: 2008-09-05

    IPC classification: H04L12/56

    Abstract: Disclosed are, inter alia, methods, apparatus, computer-storage media, mechanisms, and means associated with load balancing across multiple network address translation (NAT) instances and/or processors. N network address translation (NAT) processors and/or instances are each assigned a portion of the source address traffic in order to load balance the network address translation among them. Additionally, the address space of translated addresses is partitioned and uniquely assigned to the NAT processors and/or instances such that the identity of the assigned NAT processor and/or instance associated with a received translated address can be readily determined therefrom, and then used to network-address-translate that received packet.

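    The reverse direction, identifying the owning instance from a received translated address, follows directly from the partitioned address space. The sketch below is only an illustration, assuming contiguous per-instance pools; the pool base, block size, and names are invented for the example.

        # Minimal sketch (assumed details): instance i owns translated addresses
        # 192.0.2.(64*i) .. 192.0.2.(64*i + 63), so the instance index can be read
        # straight off a returning packet's translated address.
        import ipaddress

        POOL_BASE = int(ipaddress.ip_address("192.0.2.0"))
        ADDRS_PER_INSTANCE = 64

        def instance_for_translated(addr: str) -> int:
            """Identify the NAT instance that allocated this translated address."""
            offset = int(ipaddress.ip_address(addr)) - POOL_BASE
            return offset // ADDRS_PER_INSTANCE

        print(instance_for_translated("192.0.2.130"))  # offset 130 -> instance 2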

Buffer management technique for a hypertransport data path protocol
    5.
    Invention grant (in force)

    Publication number: US07111092B1

    Publication date: 2006-09-19

    Application number: US10826076

    Filing date: 2004-04-16

    IPC classification: G06F13/00

    CPC classification: G06F13/4221

    Abstract: A buffer-management technique efficiently manages a set of data buffers accessible to first and second devices interconnected by a split transaction bus, such as a HyperTransport (HPT) bus. To that end, a buffer manager controls access to a set of “free” buffer descriptors, each free buffer descriptor referencing a corresponding buffer in the set of data buffers. Advantageously, the buffer manager ensures that the first and second devices are allocated a sufficient number of free buffer descriptors for use in an HPT data path protocol in which the first and second devices have access to respective sets of free buffer descriptors. Because buffer management over the HPT bus is optimized by the buffer manager, the amount of processing bandwidth traditionally consumed by managing descriptors can be reduced.

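    The buffer-manager role can be pictured as keeping each side of the bus topped up with free descriptors. The sketch below is only an illustration under assumed details: the target count, the shared free list, and all names are invented for the example, not taken from the HPT protocol itself.

        # Minimal sketch (assumed details): a shared free list of buffer-descriptor
        # indices, from which each device is replenished up to a target allocation.
        from collections import deque

        class BufferManager:
            def __init__(self, num_buffers: int, per_device_target: int):
                # Each free descriptor is an index referencing one data buffer.
                self.free = deque(range(num_buffers))
                self.target = per_device_target
                self.owned = {"device_a": [], "device_b": []}

            def replenish(self, device: str) -> None:
                """Hand out free descriptors until the device holds its target count."""
                while len(self.owned[device]) < self.target and self.free:
                    self.owned[device].append(self.free.popleft())

            def release(self, device: str, descriptor: int) -> None:
                """Return a consumed descriptor to the shared free list."""
                self.owned[device].remove(descriptor)
                self.free.append(descriptor)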

Hypertransport data path protocol
    6.
    Invention grant (in force)

    Publication number: US07117308B1

    Publication date: 2006-10-03

    Application number: US10818670

    Filing date: 2004-04-06

    IPC classification: G06F12/00

    CPC classification: G06F13/387

    Abstract: A data path protocol eliminates most of the conventional read transactions required to transfer data between devices interconnected by a split transaction bus, such as a HyperTransport (HPT) bus. To that end, each device is configured to manage its own set of buffer descriptors, unlike previous data path protocols in which only one device managed all the buffer descriptors. As such, neither device has to perform a read transaction to retrieve a “free” buffer descriptor from the other device. As a result, only write transactions are performed for transferring descriptors across the HPT bus, thereby decreasing the amount of traffic over the bus and eliminating conventional latencies associated with read transactions. In addition, because descriptors are separately managed in each device, the data path protocol also conserves processing bandwidth that is traditionally consumed by managing ownership of the buffer descriptors within a single device.

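    The write-only exchange can be pictured with two devices that each manage their own descriptors and hand work to the peer by writing into the peer's queue. This is only an illustration of the idea under assumed details; the data structures and names are invented and do not reflect the protocol's actual descriptor layout.

        # Minimal sketch (assumed details): sending never reads across the bus,
        # because the sender draws a descriptor from its own local free list and
        # posts the result to the peer with a single (modeled) write.
        class Device:
            def __init__(self, name: str, num_descriptors: int):
                self.name = name
                self.free = list(range(num_descriptors))  # locally managed descriptors
                self.inbound = []                          # entries written by the peer

            def send(self, peer: "Device", payload: bytes) -> None:
                descriptor = self.free.pop()               # local, so no read transaction
                peer.inbound.append((self.name, descriptor, payload))  # write-only hand-off

            def receive(self):
                return self.inbound.pop(0) if self.inbound else None

        a, b = Device("a", 8), Device("b", 8)
        a.send(b, b"packet")
        print(b.receive())  # ('a', 7, b'packet')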

Bounded index extensible hash-based IPv6 address lookup method
    7.
    Invention grant (in force)

    Publication number: US07325059B2

    Publication date: 2008-01-29

    Application number: US10439022

    Filing date: 2003-05-15

    IPC classification: G06F15/173

    Abstract: The present invention provides a technique for efficiently looking up address-routing information in an intermediate network node, such as a router. To that end, the node locates routing information stored in its memory using one or more “lookup” tables (LUTs) that can be searched using a small, bounded number of dependent lookups, thereby reducing the number of dependent lookups conventionally performed. The LUTs are arranged so that each table provides routing information for network addresses whose subnet mask lengths fall within a different range (“stride”) of mask lengths. According to the technique, the node locates a network address's routing information by searching the LUTs, in order of decreasing prefix length, until the routing information is found. Preferably, several tables are searched in parallel. A match in a LUT may further point to a small MTRIE that enables the final bits of a prefix to be matched. That final MTRIE is searched using a relatively small, bounded number of dependent lookups.
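    The stride-ordered search can be sketched as one hash table per prefix-length range, probed from the longest range downward. This is only an illustration under assumed details; the strides, table contents, and next-hop labels are invented for the example, and the final small MTRIE stage is omitted.

        # Minimal sketch (assumed details): probe per-stride tables in order of
        # decreasing prefix length, so a lookup costs at most one probe per stride.
        import ipaddress

        STRIDES = [64, 48, 32]   # searched longest first
        TABLES = {
            64: {"2001:db8:0:1::/64": "next-hop A"},
            48: {"2001:db8::/48": "next-hop B"},
            32: {"2001::/32": "next-hop C"},
        }

        def lookup(addr: str):
            ip = ipaddress.IPv6Address(addr)
            for stride in STRIDES:
                # Mask the address down to this stride and probe that stride's table.
                key = ipaddress.IPv6Network((ip, stride), strict=False).with_prefixlen
                hit = TABLES[stride].get(key)
                if hit is not None:
                    return hit          # longest matching stride wins
            return None

        print(lookup("2001:db8:0:1::5"))  # falls in the /64 entry -> "next-hop A"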

    摘要翻译: 本发明提供了一种用于在诸如路由器之类的中间网络节点中有效地查找地址路由信息的技术。 为此,节点使用可以使用小的有界数量的依赖查找来搜索的一个或多个“查找”表(LUT)定位存储在其存储器中的路由信息​​,从而减少常规执行的依赖查找的数量。 LUT被布置成使得每个表为网络掩码长度在掩模长度的不同范围(“stride”)内的网络地址提供路由信息。 根据该技术,节点通过按照减少前缀长度的顺序搜索LUT来定位网络地址的路由信息​​,直到找到路由信息。 优选并行地搜索几个表。 LUT中的匹配还可以指向能够匹配前缀的最后比特的小MTRIE。 使用相对较小的有限数量的依赖查找来搜索最后的MTRIE。