Abstract:
A line card within a switching node coupled to a network is described. The line card includes a link interface for transmitting communications along a communication link within the network. The link interface further includes multiple logical interfaces having one or more partitions and one or more buffers. The partitions and the buffers accommodate multiple classes of service requirements for the communications transmitted within the network. In one embodiment, bandwidth management and connection admission control in a network encompass both the line card and the system that uses the line card.
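The per-class partitioning could be pictured as follows; this is a hypothetical C sketch, not taken from the patent, of how a logical interface might track separate buffer and bandwidth reservations per class of service together with a simple admission check. All structure and field names are assumptions.

/* Hypothetical sketch of per-class partitions on a logical interface.
 * Names and sizes are illustrative only. */
#include <stdio.h>

#define NUM_CLASSES 4   /* e.g. four classes of service */

struct cos_partition {
    unsigned int reserved_cells;  /* buffer cells reserved for this class */
    unsigned int avail_bandwidth; /* bandwidth still available for admission */
    unsigned int in_use;          /* cells currently queued */
};

struct logical_interface {
    int id;
    struct cos_partition partition[NUM_CLASSES];
};

/* Simple admission check: a new connection of a given class is admitted
 * only if its partition still has buffer and bandwidth headroom. */
static int admit(struct logical_interface *lif, int cls,
                 unsigned int cells_needed, unsigned int bw_needed)
{
    struct cos_partition *p = &lif->partition[cls];
    if (p->in_use + cells_needed > p->reserved_cells) return 0;
    if (bw_needed > p->avail_bandwidth) return 0;
    p->in_use += cells_needed;
    p->avail_bandwidth -= bw_needed;
    return 1;
}

int main(void)
{
    struct logical_interface lif = { .id = 0 };
    lif.partition[1] = (struct cos_partition){ .reserved_cells = 1000,
                                               .avail_bandwidth = 5000 };
    printf("admitted: %d\n", admit(&lif, 1, 200, 1000));
    return 0;
}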
Abstract:
A method for populating location wiremap databases. In particular implementations, a method includes establishing a link layer connection with a client on a switch port, where the switch port is associated with a port identifier and is mapped to a location; identifying one or more connection attributes of the connection, where the connection attributes comprise a network layer address of the client; and transmitting the port identifier and the network layer address of the client to a location server.
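A minimal, hypothetical C sketch of the reporting step follows: once a client's network layer address has been learned on a switch port, the (port identifier, address) pair is sent to a location server. The structure and function names are illustrative, not part of the claimed method.

/* Illustrative sketch only; a real switch would send this update to the
 * location server over the management network. */
#include <stdio.h>
#include <string.h>

struct port_binding {
    char port_id[16];     /* switch port identifier mapped to a location */
    char client_mac[18];  /* learned when the link layer connection is set up */
    char client_ip[16];   /* identified network layer address of the client */
};

static void report_to_location_server(const struct port_binding *b)
{
    printf("location-server update: port=%s ip=%s\n", b->port_id, b->client_ip);
}

int main(void)
{
    struct port_binding b;
    strcpy(b.port_id, "Gi1/0/12");
    strcpy(b.client_mac, "00:11:22:33:44:55");
    strcpy(b.client_ip, "10.0.0.42");
    report_to_location_server(&b);
    return 0;
}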
Abstract:
In an example embodiment, packets for a selected flow are replicated and sent over one or more diverse paths, such as a primary path and at least one secondary path, to a destination switching device. At the destination switching device, one copy of the replicated packets is selected for delivery to the destination, and the remaining copies are discarded. In the event that packets are not received at the destination switching device due to loss of connection on the primary path or packets are not timely delivered due to congestion on the primary path, a different path may be selected as the primary path.
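A minimal sketch of the duplicate-elimination step at the destination switching device, assuming the replicated copies carry a sequence number that lets duplicates be recognized: the first copy received from whichever path is delivered, and later copies are discarded. The window bookkeeping here is a toy stand-in for a real duplicate-elimination window.

#include <stdio.h>
#include <stdbool.h>
#include <string.h>

#define SEQ_WINDOW 64

struct dedup_state {
    bool seen[SEQ_WINDOW];   /* seen[seq % SEQ_WINDOW]; simplified bookkeeping */
};

/* Returns true if this copy should be delivered, false if it is a duplicate. */
static bool accept_packet(struct dedup_state *s, unsigned int seq)
{
    unsigned int slot = seq % SEQ_WINDOW;
    if (s->seen[slot]) return false;  /* already delivered from another path */
    s->seen[slot] = true;
    return true;
}

int main(void)
{
    struct dedup_state s;
    memset(&s, 0, sizeof s);
    /* The same packet arrives once via the primary path and once via a
     * secondary path; only the first copy is delivered. */
    printf("primary copy delivered:   %d\n", accept_packet(&s, 7));
    printf("secondary copy delivered: %d\n", accept_packet(&s, 7));
    return 0;
}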
Abstract:
A standby card is powered after an active card. A connection command is received by the standby card from the active card. The connection command is associated with a logical connection number (LCN) for a connection. The LCN is used as a first index to a location in a first memory area to retrieve a second index to a location in a second memory area. The second index is used to access the connection from the location in the second memory area.
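A hypothetical C sketch of the two-level lookup described above: the LCN indexes a first memory area to obtain a second index, which in turn locates the connection record in a second memory area. Table sizes and field names are assumptions for illustration.

#include <stdio.h>

#define MAX_LCN   1024
#define MAX_CONNS 256

struct connection {
    int lcn;
    int vpi, vci;   /* example connection parameters */
};

static int lcn_to_slot[MAX_LCN];                 /* first memory area: LCN -> second index */
static struct connection conn_table[MAX_CONNS];  /* second memory area: connection records */

static struct connection *lookup(int lcn)
{
    int slot = lcn_to_slot[lcn];   /* first index: the LCN itself */
    if (slot < 0) return NULL;
    return &conn_table[slot];      /* second index: location of the connection */
}

int main(void)
{
    for (int i = 0; i < MAX_LCN; i++) lcn_to_slot[i] = -1;

    /* Standby card programs a connection after receiving a connection
     * command carrying LCN 37 from the active card. */
    lcn_to_slot[37] = 5;
    conn_table[5] = (struct connection){ .lcn = 37, .vpi = 1, .vci = 100 };

    struct connection *c = lookup(37);
    if (c) printf("LCN 37 -> vpi=%d vci=%d\n", c->vpi, c->vci);
    return 0;
}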
Abstract:
The present disclosure describes a method and network device for achieving enhanced performance with multiple CPU cores in a network device having a symmetric multiprocessing architecture. The disclosed method allows each central processing unit (CPU) core to store a non-atomic data structure, specific to that networking CPU core, in a memory shared by the plurality of CPU cores. The memory is not associated with any locking mechanism. When a data packet is received by a particular CPU core, the disclosed system updates the value of the non-atomic data structure corresponding to that CPU core. The data structure may be a counter or a fragment table. Further, a dedicated CPU core is allocated to process only data packets received from other CPU cores and is responsible for dynamically responding to queries received from a control plane process.
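A rough sketch, under the assumption that the per-core data structure is a packet counter: each core increments only its own slot in shared memory without locks or atomics, and a total is computed by summing all slots, as a dedicated query-handling core might do on behalf of the control plane. Names and sizes are illustrative.

#include <stdio.h>

#define MAX_CORES 8
#define CACHE_LINE 64

struct percore_counter {
    unsigned long long rx_packets;
    char pad[CACHE_LINE - sizeof(unsigned long long)]; /* avoid false sharing */
};

/* Shared memory, but each core writes only its own slot, so no locking. */
static struct percore_counter counters[MAX_CORES];

/* Called on the core that received the packet; plain non-atomic increment. */
static void on_packet_received(int core_id)
{
    counters[core_id].rx_packets++;
}

/* Executed when the control plane queries the total (e.g. by a dedicated core). */
static unsigned long long total_rx_packets(void)
{
    unsigned long long sum = 0;
    for (int i = 0; i < MAX_CORES; i++)
        sum += counters[i].rx_packets;
    return sum;
}

int main(void)
{
    on_packet_received(0);
    on_packet_received(3);
    on_packet_received(3);
    printf("total rx packets: %llu\n", total_rx_packets());
    return 0;
}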