Abstract:
Techniques for implementing deadlock avoidance in a leaf-spine network are described. In one embodiment, a method includes monitoring traffic of a plurality of packets at a leaf switch in a network having a leaf-spine topology. The method includes marking a packet with an identifier associated with an inbound uplink port of the leaf switch when the packet is received from one of a first spine switch and a second spine switch. The method includes detecting a valley routing condition upon determining that the packet marked with the identifier is being routed to an outbound uplink port of the leaf switch to be transmitted to the first spine switch or the second spine switch. Upon detecting the valley routing condition, the method includes dropping packets associated with a no-drop class of service when a packet buffer of the inbound uplink port reaches a predetermined threshold.
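The mark-then-detect logic described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the class names, the fill-ratio threshold, and the per-port buffer model are all assumptions.

```python
from dataclasses import dataclass
from typing import Optional

BUFFER_THRESHOLD = 0.9  # assumed fill ratio at which no-drop packets are dropped


@dataclass
class Packet:
    cos: str                              # class of service, e.g. "no-drop"
    inbound_uplink: Optional[int] = None  # marker: uplink port the packet arrived on


class LeafSwitch:
    def __init__(self, uplink_ports):
        self.uplink_ports = set(uplink_ports)
        self.buffer_fill = {p: 0.0 for p in uplink_ports}  # per-port fill ratio

    def receive(self, pkt, in_port):
        # Mark packets that arrive from a spine switch on an uplink port.
        if in_port in self.uplink_ports:
            pkt.inbound_uplink = in_port
        return pkt

    def forward(self, pkt, out_port):
        # Valley routing: a spine-received (marked) packet headed back to a spine.
        valley = pkt.inbound_uplink is not None and out_port in self.uplink_ports
        if valley and self.buffer_fill[pkt.inbound_uplink] >= BUFFER_THRESHOLD:
            if pkt.cos == "no-drop":
                return "drop"   # break the potential buffer-deadlock cycle
        return "forward"
```

Dropping normally-lossless ("no-drop") traffic only under the combination of a valley route and a nearly full inbound buffer is what breaks the circular buffer dependency that would otherwise deadlock the fabric.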
Abstract:
Disclosed are systems, methods, and computer-readable storage media for synchronizing a secondary vPC node to a primary vPC node in a BFD protocol over a VxLAN channel with a remote node. In some embodiments of the present technology, a primary vPC node can receive a packet from the remote node. The primary vPC node can then determine that the packet includes a MAC address corresponding to either the primary vPC node or a secondary vPC node, and at least one inner packet identifier. Additionally, the primary vPC node can identify an access control list (ACL) entry from a set of ACL entries based on the at least one inner packet identifier. Subsequently, based on the ACL entry, the primary vPC node can generate a copy of the packet. The primary vPC node can then transmit the packet to the secondary vPC node.
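The ACL-driven copy step can be sketched like this. The MAC addresses, the inner-identifier value, and the action names are hypothetical placeholders, not values from the disclosure.

```python
import copy

PRIMARY_MAC = "00:00:00:00:00:01"    # hypothetical primary vPC node MAC
SECONDARY_MAC = "00:00:00:00:00:02"  # hypothetical secondary vPC node MAC

# ACL entries keyed by an inner packet identifier (e.g. a BFD discriminator).
ACL = {0x3F: "copy-to-secondary"}


def handle_on_primary(pkt):
    """Return the actions the primary vPC node takes for an incoming packet."""
    if pkt["dmac"] not in (PRIMARY_MAC, SECONDARY_MAC):
        return ["forward"]                    # not addressed to either vPC node
    action = ACL.get(pkt["inner_id"])
    if action == "copy-to-secondary":
        dup = copy.deepcopy(pkt)              # generate a copy per the ACL hit
        return ["process-locally", ("send-to-secondary", dup)]
    return ["process-locally"]
```

The ACL match on the inner identifier is what lets the primary node copy only the BFD session traffic the secondary needs, rather than mirroring all packets.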
Abstract:
In one embodiment, longest prefix matching (LPM) operations are performed on a value in multiple interspersed prefix length search spaces to determine an overall longest prefix matching result in a packet switching system. A first LPM lookup unit performs a first LPM operation on the particular lookup value in a first search space finding a first longest matching prefix, and a second LPM lookup unit performs a second LPM operation on the particular lookup value in a second search space finding a second longest matching prefix. The longer of the first and second longest matching prefixes determines the overall LPM. In one embodiment, the first search space and the second search space include non-default route prefixes with interspersed prefix lengths matching a same value, such as, but not limited to, the particular lookup value (e.g., a destination address of a packet).
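The two-unit scheme reduces to "run LPM in each space, keep the longer match." A minimal software sketch, assuming the search spaces are plain prefix lists (real lookup units would be TCAM or trie hardware):

```python
import ipaddress


def lpm(prefixes, addr):
    """Longest prefix in `prefixes` matching `addr`, or None if no match."""
    best = None
    for p in prefixes:
        net = ipaddress.ip_network(p)
        if addr in net and (best is None or net.prefixlen > best.prefixlen):
            best = net
    return best


def overall_lpm(space1, space2, addr_str):
    addr = ipaddress.ip_address(addr_str)
    a, b = lpm(space1, addr), lpm(space2, addr)   # the two lookup units
    # The longer of the two longest matching prefixes is the overall result.
    candidates = [m for m in (a, b) if m is not None]
    return max(candidates, key=lambda n: n.prefixlen, default=None)
```

Because each unit independently returns its best match, the units can run in parallel and only the final length comparison is serialized.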
Abstract:
In accordance with one example embodiment, a system configured for programming a network layer multicast address entry in a routing table of an ingress line card module is disclosed. The network layer multicast address entry includes a network layer address associated with at least one egress line card. The system is further configured for programming a data link layer multicast routing address entry in a routing table of a fabric card module in which the data link layer multicast routing address entry corresponds to the network layer multicast address entry.
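The pairing of the two table entries can be illustrated as follows. The table shapes and function name are assumptions; the IPv4-group-to-MAC derivation shown is the standard mapping of the low 23 bits of the group address into the 01:00:5e OUI range.

```python
def program_multicast(ingress_lc_table, fabric_table, group_ip, egress_lcs):
    # Network-layer entry on the ingress line card, naming the egress line cards.
    ingress_lc_table[group_ip] = {"egress_line_cards": egress_lcs}
    # Derive the corresponding data-link-layer multicast MAC from the IPv4
    # group address (low 23 bits mapped into 01:00:5e:xx:xx:xx).
    o = [int(x) for x in group_ip.split(".")]
    mac = f"01:00:5e:{o[1] & 0x7f:02x}:{o[2]:02x}:{o[3]:02x}"
    # Matching data-link-layer entry programmed on the fabric card module.
    fabric_table[mac] = {"corresponds_to": group_ip}
    return mac
```

Keeping the fabric card's L2 entry in correspondence with the line card's L3 entry is what lets the fabric replicate toward only the egress line cards named in the network-layer entry.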
Abstract:
In one embodiment, a method includes receiving a request to add a prefix to memory for a route lookup at a forwarding device, the memory comprising a plurality of pivot tiles for storing pivot entries, each of the pivot entries comprising a plurality of prefixes and a pointer to a trie index, searching, at the forwarding device, a dynamic pool of the pivot tiles based on a base-width associated with the prefix, allocating at least a portion of a pivot tile to the base-width and creating a pivot entry for the prefix and other prefixes with a corresponding base-width, and dynamically updating prefixes stored on the pivot tiles based on route changes to optimize storage of prefixes on the pivot tiles. An apparatus and logic are also disclosed herein.
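The search-then-allocate path can be sketched as below. The tile geometry (capacity), class names, and pivot-entry shape are illustrative assumptions; the dynamic re-optimization on route changes is omitted.

```python
class PivotTile:
    def __init__(self, base_width, capacity=4):
        self.base_width = base_width   # prefix base-width this tile serves
        self.capacity = capacity
        self.pivots = []               # each pivot entry: (prefixes, trie_index)

    def has_room(self):
        return len(self.pivots) < self.capacity


def add_prefix(dynamic_pool, prefix, base_width, trie_index):
    # 1. Search the dynamic pool for a tile already serving this base-width.
    tile = next((t for t in dynamic_pool
                 if t.base_width == base_width and t.has_room()), None)
    # 2. Otherwise allocate (a portion of) a fresh tile to this base-width.
    if tile is None:
        tile = PivotTile(base_width)
        dynamic_pool.append(tile)
    # 3. Create a pivot entry grouping the prefix under the shared base-width,
    #    with its pointer into the trie index.
    tile.pivots.append(([prefix], trie_index))
    return tile
```

Grouping prefixes by base-width keeps each tile densely packed, which is the storage optimization the dynamic updates then maintain as routes come and go.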
Abstract:
In one embodiment, an approach is provided to efficiently program routes on line cards and fabric modules in a modular router to avoid hot spots and thus avoid undesirable packet loss. Each fabric module includes two separate processors or application-specific integrated circuits (ASICs). In another embodiment, each fabric module processor is replaced by a pair of fabric module processors arranged in series with each other, and each processor is responsible for routing only one address family, e.g., IPv4 or IPv6 traffic. The paired fabric module processors communicate with one another via a trunk line, and any packet received at either one of the pair is passed to the other of the pair before being passed back to a line card.
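The serial processor pair can be modeled as two ASICs that each route one address family and hand the other family across the trunk. A toy sketch under those assumptions (class and field names are illustrative):

```python
class FabricAsic:
    def __init__(self, family):
        self.family = family   # "ipv4" or "ipv6": the only family this ASIC routes
        self.peer = None       # the other ASIC of the pair, reached via the trunk

    def handle(self, pkt):
        if pkt["family"] == self.family:
            return f"routed-by-{self.family}"
        # Not our family: pass across the trunk to the paired processor
        # before the packet is sent back to a line card.
        return self.peer.handle(pkt)
```

Because each ASIC holds only one family's routes, neither table is a hot spot, at the cost of one trunk hop for packets that land on the "wrong" ASIC first.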
Abstract:
An example method for hierarchical programming of dual-stack switches in a network environment is provided and includes receiving packets from the network at a line card in a modular switch, a first portion of the packets being destined to Internet Protocol version 6 (IPv6) destination IP (DIP) addresses and a second portion of the packets being destined to IPv4 DIP addresses, and performing hierarchical lookups of the IPv6 DIP addresses and the IPv4 DIP addresses. Layer 3 (L3) lookups for the IPv6 DIP addresses are performed at the line card, and L3 lookups for IPv4 DIP addresses are performed at a fabric module in the modular switch. The line card and the fabric module are interconnected inside a chassis of the modular switch. In specific embodiments, the method further comprises inspecting the packets' destination Media Access Control (DMAC) addresses, which comprise router MAC addresses indicative of the IPv6 or IPv4 address family.
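The DMAC-driven split can be sketched as a line-card dispatch step: IPv6 L3 lookups resolve locally, IPv4 L3 lookups defer to the fabric module. The router-MAC values below are hypothetical placeholders.

```python
ROUTER_MAC_V6 = "aa:bb:cc:00:00:06"   # assumed router MAC for the IPv6 family
ROUTER_MAC_V4 = "aa:bb:cc:00:00:04"   # assumed router MAC for the IPv4 family


def line_card_pipeline(pkt):
    """Decide where the L3 lookup for this packet is performed."""
    if pkt["dmac"] == ROUTER_MAC_V6:
        return "l3-lookup-on-line-card"   # IPv6 DIPs resolved at the line card
    if pkt["dmac"] == ROUTER_MAC_V4:
        return "send-to-fabric-module"    # IPv4 L3 lookup done at the fabric module
    return "l2-switch"                    # not a routed packet
```

Splitting the two address families across the line card and fabric module lets each stage hold only one family's forwarding table, roughly doubling usable table capacity in the chassis.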
Abstract:
A method is provided in one example and includes broadcasting a switching node identifier associated with a first link-state protocol enabled switching node to a plurality of link-state protocol enabled switching nodes. The plurality of link-state protocol enabled switching nodes are in communication with one another via a link-state protocol cloud. The method further includes broadcasting a priority associated with the first link-state protocol enabled switching node to the plurality of link-state protocol enabled switching nodes. The method further includes broadcasting connectivity information of the first link-state protocol enabled switching node to the plurality of link-state protocol enabled switching nodes using the link-state protocol cloud. The connectivity information includes connectivity of the first link-state protocol enabled switching node with at least one spanning tree protocol enabled switching node.
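The three broadcast steps can be modeled as flooding three messages to every member of the cloud. The message shapes below are illustrative, not an actual link-state protocol encoding.

```python
def broadcast_node_state(node_id, priority, connectivity, cloud_nodes):
    """Flood node identifier, priority, and connectivity over the LSP cloud."""
    messages = [
        {"type": "switching-node-id", "value": node_id},
        {"type": "priority", "value": priority},
        # Connectivity includes links to spanning-tree-protocol-enabled switches.
        {"type": "connectivity", "value": connectivity},
    ]
    # Model the link-state protocol cloud as delivery to every member node.
    return {node: list(messages) for node in cloud_nodes}
```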
Abstract:
Techniques are described for sending Compute Express Link (CXL) packets over Ethernet (CXL-E) in a composable data center that may include disaggregated, composable servers. The techniques may include receiving, from a first server device, a request to bind the first server device with a multiple logical device (MLD) appliance. Based at least in part on the request, a first CXL-E connection may be established for the first server device to export a computing resource to the MLD appliance. The techniques may also include receiving, from the MLD appliance, an indication that the computing resource is available, and receiving, from a second server device, a second request for the computing resource. Based at least in part on the second request, a second CXL-E connection may be established for the second server device to consume or otherwise utilize the computing resource of the first server device via the MLD appliance.
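The two-connection binding flow can be sketched as a small control-plane model. The orchestrator and appliance classes, method names, and resource labels below are assumptions for illustration, not a CXL-E API.

```python
class MldAppliance:
    def __init__(self):
        self.resources = {}          # resource name -> exporting server


class Orchestrator:
    def __init__(self, appliance):
        self.appliance = appliance
        self.connections = []        # established CXL-E connections

    def bind_export(self, server, resource):
        # First CXL-E connection: a server exports a resource to the appliance,
        # which then advertises the resource as available.
        self.connections.append((server, "export", resource))
        self.appliance.resources[resource] = server

    def bind_consume(self, server, resource):
        # Second CXL-E connection: another server consumes the resource
        # via the appliance, never talking to the exporter directly.
        if resource not in self.appliance.resources:
            raise LookupError(f"{resource} not advertised by appliance")
        self.connections.append((server, "consume", resource))
        return self.appliance.resources[resource]   # the exporting server
```

Routing both connections through the MLD appliance is what makes the exported resource composable: consumers bind to the appliance's advertisement, not to the exporting server itself.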