LABEL-BASED FORWARDING WITH ENHANCED SCALABILITY
    11.
    Invention application
    LABEL-BASED FORWARDING WITH ENHANCED SCALABILITY (Status: Pending, published)

    Publication No.: US20160156551A1

    Publication date: 2016-06-02

    Application No.: US14634842

    Filing date: 2015-03-01

    CPC classification number: H04L45/50 H04L45/24

    Abstract: A method for communication includes configuring a router to forward data packets in a network in accordance with MPLS labels appended to the packets. A group of two or more of the interfaces is defined as a multi-path routing group in a forwarding table within the router. A plurality of records are stored in an ILM in the router, corresponding to different, respective label IDs, all pointing to the set of the entries in the forwarding table that belong to the multi-path routing group. Upon receiving in the router an incoming data packet having a label ID corresponding to any given record in the plurality, one of the interfaces in the group is selected, responsively to the given record and to the set of the entries in the forwarding table to which the given record points, for forwarding the incoming data packet without changing the label ID.

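    The scheme described in the abstract can be illustrated with a minimal sketch, assuming a dict-based ILM and forwarding table and per-flow hashing for member selection (all names and structures here are illustrative, not from the patent):

```python
import hashlib

# Forwarding table: one multi-path routing group whose entries are interfaces.
forwarding_table = {"group_A": ["eth0", "eth1", "eth2"]}

# ILM: several different label IDs all point to the same multi-path group.
ilm = {100: "group_A", 200: "group_A", 300: "group_A"}

def forward(label_id, flow_key):
    """Select an egress interface for a labeled packet without swapping the label."""
    group = forwarding_table[ilm[label_id]]
    # Deterministic per-flow hash keeps each flow pinned to one interface.
    idx = int(hashlib.sha256(flow_key.encode()).hexdigest(), 16) % len(group)
    return group[idx], label_id  # the label ID is forwarded unchanged

iface, label = forward(200, "10.0.0.1->10.0.0.2")
```

    The key point of the claim is that many label records share one set of forwarding-table entries, so adding labels does not grow the multipath state.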

    Dragonfly Plus: Communication Over Bipartite Node Groups Connected by a Mesh Network
    12.
    Invention application
    Dragonfly Plus: Communication Over Bipartite Node Groups Connected by a Mesh Network (Status: Granted)

    Publication No.: US20160028613A1

    Publication date: 2016-01-28

    Application No.: US14337334

    Filing date: 2014-07-22

    CPC classification number: H04L45/122 H04L45/14

    Abstract: A communication network includes multiple nodes, which are arranged in groups such that the nodes in each group are interconnected in a bipartite topology and the groups are interconnected in a mesh topology. The nodes are configured to convey traffic between source hosts and respective destination hosts by routing packets among the nodes on paths that do not traverse any intermediate hosts other than the source and destination hosts.

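    A hedged sketch of the routing idea: each group is a bipartite leaf/spine graph, and spines of different groups are meshed together, so every path stays on switches between the source and destination hosts. Group and node names are placeholders, not from the patent:

```python
def route(src, dst):
    """Return a switch-only path from the source leaf to the destination leaf.

    src and dst are (group, leaf) pairs; no intermediate host is traversed.
    """
    (src_group, src_leaf), (dst_group, dst_leaf) = src, dst
    if src_group == dst_group:
        # Intra-group: hop through any spine of the bipartite group.
        return [src_leaf, f"{src_group}-spine", dst_leaf]
    # Inter-group: the spines of the two groups are directly meshed.
    return [src_leaf, f"{src_group}-spine", f"{dst_group}-spine", dst_leaf]

path = route(("g0", "g0-leaf1"), ("g2", "g2-leaf0"))
```

    Note how the inter-group case needs only one mesh hop between spines, which is what keeps path lengths short in this topology.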

    GLOBAL BANDWIDTH-AWARE ADAPTIVE ROUTING

    Publication No.: US20250071063A1

    Publication date: 2025-02-27

    Application No.: US18377642

    Filing date: 2023-10-06

    Abstract: Systems and methods herein are for global bandwidth-aware adaptive routing in a network communication and include at least one switch to determine an event associated with a change in network bandwidth between a local host and a remote host, where the at least one switch further provides routing protocols for the network communication, and where the routing protocols are to be used to modify an adaptive routing in the at least one switch for selection from among different routes for the network communication between the local host and the remote host.
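    The abstract's mechanism can be sketched as an event handler that updates per-route bandwidth and a selection step that only considers routes still meeting an eligibility floor. The class, field names, and threshold are assumptions for illustration:

```python
class Switch:
    """Toy model of a switch whose adaptive-routing set reacts to bandwidth events."""

    def __init__(self, routes):
        self.routes = dict(routes)   # route name -> available bandwidth (Gb/s)
        self.min_bw = 10             # illustrative eligibility floor

    def on_bandwidth_event(self, route, new_bw):
        """React to a reported change in network bandwidth toward the remote host."""
        self.routes[route] = new_bw

    def adaptive_candidates(self):
        """Routes the adaptive-routing logic may still select from."""
        return [r for r, bw in self.routes.items() if bw >= self.min_bw]

sw = Switch({"path_a": 100, "path_b": 100})
sw.on_bandwidth_event("path_b", 5)   # a degraded link is reported
```

    After the event, only `path_a` remains a candidate, which is the "modify an adaptive routing ... for selection from different routes" behavior the abstract describes.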

    EARLY AND EFFICIENT PACKET TRUNCATION

    Publication No.: US20250016110A1

    Publication date: 2025-01-09

    Application No.: US18890429

    Filing date: 2024-09-19

    Abstract: Networking devices, systems, and methods are provided. In one example, a method includes receiving a packet at a networking device; evaluating the packet; based on the evaluation of the packet, truncating the packet from a first size to a second size that is smaller than the first size; and storing the truncated packet in a buffer prior to transmitting the truncated packet with the networking device.
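    The order of operations in the abstract (evaluate, truncate, then buffer) can be sketched as below; the truncation threshold and target size are illustrative assumptions, not values from the patent:

```python
TRUNCATED_SIZE = 64  # keep only the leading header bytes; value is illustrative

def should_truncate(packet: bytes) -> bool:
    """Illustrative evaluation: e.g. mirror/telemetry copies need headers only."""
    return len(packet) > 128

def ingest(packet: bytes, buffer: list) -> bytes:
    """Truncate BEFORE buffering, so the buffer only ever holds the smaller packet."""
    if should_truncate(packet):
        packet = packet[:TRUNCATED_SIZE]
    buffer.append(packet)  # stored truncated, then transmitted later
    return packet

buf = []
out = ingest(b"\x00" * 1500, buf)
```

    Truncating early is what makes this "efficient": buffer space is consumed at the second, smaller size rather than the full wire size.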

    Allocation of shared reserve memory

    Publication No.: US12192122B2

    Publication date: 2025-01-07

    Application No.: US18581423

    Filing date: 2024-02-20

    Abstract: A device includes ports, a packet processor, and a memory management circuit. The ports communicate packets over a network. The packet processor processes the packets using queues. The memory management circuit maintains a shared buffer in a memory and adaptively allocates memory resources from the shared buffer to the queues, maintains in the memory, in addition to the shared buffer, a shared-reserve memory pool for use by the queues, identifies, among the queues, a queue that requires additional memory resources, the queue having an occupancy that is (i) above a current value of a dynamic threshold, rendering the queue ineligible for additional allocation from the shared buffer, and (ii) no more than a defined margin above the current value of the dynamic threshold, rendering the queue eligible for allocation from the shared-reserve memory pool, and allocates memory resources to the identified queue from the shared-reserve memory pool.
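    The two-tier eligibility test in the abstract reduces to a simple comparison against the dynamic threshold and a margin above it; a minimal sketch, with threshold and margin values assumed for illustration:

```python
def allocation_source(occupancy: int, dyn_threshold: int, margin: int) -> str:
    """Decide which pool (if any) may supply additional memory to a queue."""
    if occupancy <= dyn_threshold:
        return "shared_buffer"          # still eligible for the shared buffer
    if occupancy <= dyn_threshold + margin:
        return "shared_reserve"         # over threshold, but within the margin
    return "none"                       # too far over: no additional allocation

src = allocation_source(occupancy=110, dyn_threshold=100, margin=20)
```

    The shared-reserve pool thus acts as a safety band: queues just over the dynamic threshold are not starved outright, but runaway queues still get nothing.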

    BI-DIRECTIONAL ENCRYPTION/DECRYPTION DEVICE FOR UNDERLAY AND OVERLAY OPERATIONS

    Publication No.: US20240236059A1

    Publication date: 2024-07-11

    Application No.: US18615674

    Filing date: 2024-03-25

    Abstract: Technologies for bi-directional encryption and decryption for underlay and overlay operations are described. One network device includes a path-selection circuit that operates in a first mode or a second mode. In the first mode, the path-selection circuit receives a first incoming packet on a first port, sends it to security circuitry to decrypt the first incoming packet to obtain a first decrypted packet, sends the first decrypted packet to processing circuitry to process the first decrypted packet to obtain a first outgoing packet, and sends the first outgoing packet to a second port of the network device. In the second mode, the path-selection circuit receives a second incoming packet on a third port, sends it to the processing circuitry to de-encapsulate the second incoming packet to obtain a second outgoing packet, and sends the second outgoing packet to a fourth port of the network device.
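    The two modes can be sketched as a dispatch function; the port names and the string transforms standing in for the security and processing circuitry are placeholders, not the device's actual interfaces:

```python
def path_select(mode: int, packet: str):
    """Route a packet through the decrypt+process path (mode 1) or the
    de-encapsulation path (mode 2), returning (egress_port, outgoing_packet)."""
    if mode == 1:
        decrypted = packet.replace("enc:", "")   # stand-in for security circuitry
        outgoing = decrypted.upper()             # stand-in for processing circuitry
        return ("port2", outgoing)
    if mode == 2:
        outgoing = packet.removeprefix("tun|")   # stand-in for de-encapsulation
        return ("port4", outgoing)
    raise ValueError("unknown mode")

port, pkt = path_select(1, "enc:payload")
```

    The essential design point is that one path-selection circuit steers packets through different pipeline stages depending on whether the traffic is underlay (encrypted) or overlay (encapsulated).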

    Allocation of shared reserve memory
    18.
    Invention publication

    Publication No.: US20240195754A1

    Publication date: 2024-06-13

    Application No.: US18581423

    Filing date: 2024-02-20

    Abstract: A device includes ports, a packet processor, and a memory management circuit. The ports communicate packets over a network. The packet processor processes the packets using queues. The memory management circuit maintains a shared buffer in a memory and adaptively allocates memory resources from the shared buffer to the queues, maintains in the memory, in addition to the shared buffer, a shared-reserve memory pool for use by the queues, identifies, among the queues, a queue that requires additional memory resources, the queue having an occupancy that is (i) above a current value of a dynamic threshold, rendering the queue ineligible for additional allocation from the shared buffer, and (ii) no more than a defined margin above the current value of the dynamic threshold, rendering the queue eligible for allocation from the shared-reserve memory pool, and allocates memory resources to the identified queue from the shared-reserve memory pool.
