Abstract:
A multi-layer network element for forwarding received packets from an input port to one or more output ports. The packet is examined to look for first and second forwarding information. A packet is also assigned to a class and provided with default packet forwarding information. An associative memory is searched once for each type of information. The results from the two searches are combined with the default packet forwarding information to forward the packet to the appropriate one or more output ports. In some instances, the results of the first search dominate the forwarding decision, in others, the results of the second search dominate, and in still other instances, the default information dominates.
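The sketch below illustrates one way the two search results and the class-based default might be combined. The structure and function names are hypothetical, and the precedence shown is only one of the orderings the abstract allows.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical search result: validity flag plus a bitmap of output ports. */
typedef struct {
    bool     valid;      /* did this search produce a match?  */
    uint32_t port_mask;  /* one bit per output port           */
} fwd_result_t;

/*
 * Combine the two associative-memory search results with the
 * class-based default.  The precedence shown is illustrative: per the
 * abstract, either search result or the default may dominate
 * depending on the instance.
 */
static uint32_t combine_forwarding(fwd_result_t first, fwd_result_t second,
                                   uint32_t default_mask)
{
    if (first.valid)
        return first.port_mask;   /* first search dominates    */
    if (second.valid)
        return second.port_mask;  /* then the second search    */
    return default_mask;          /* otherwise use the default */
}
```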
Abstract:
A method and apparatus for providing hardware-assisted CPU access to a forwarding database is described. According to one aspect of the present invention, a switch fabric provides access to a forwarding database on behalf of a processor. The switch fabric includes a memory access interface configured to arbitrate access to a forwarding database memory. The switch fabric also includes a search engine coupled to the memory access interface and to multiple input ports. The search engine is configured to schedule and perform accesses to the forwarding database memory and to transfer forwarding decisions retrieved therefrom to the input ports. The switch fabric further includes command execution logic that is configured to interface with the processor for performing forwarding database accesses requested by the processor. According to another aspect of the invention, one or more commands are provided to implement the following functions: (1) learning a supplied address; (2) reading associated data corresponding to a supplied search key; (3) aging forwarding database entries; (4) invalidating entries; (5) accessing mask data, such as mask data that may be stored in a mask per bit (MPB) content addressable memory (CAM), corresponding to a particular search key; (6) replacing forwarding database entries; and (7) accessing entries in the forwarding database.
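The following sketch shows how the command set enumerated above might be encoded for the command execution logic. The type and field names are assumptions for illustration, not the patented interface.

```c
#include <stdint.h>

/* Hypothetical encoding of the command set described above. */
typedef enum {
    FDB_CMD_LEARN,        /* (1) learn a supplied address               */
    FDB_CMD_READ,         /* (2) read associated data for a search key  */
    FDB_CMD_AGE,          /* (3) age forwarding database entries        */
    FDB_CMD_INVALIDATE,   /* (4) invalidate entries                     */
    FDB_CMD_ACCESS_MASK,  /* (5) access MPB CAM mask data for a key     */
    FDB_CMD_REPLACE,      /* (6) replace forwarding database entries    */
    FDB_CMD_ACCESS_ENTRY  /* (7) access entries in the database         */
} fdb_cmd_t;

/* A CPU-issued request handed to the switch fabric's command
 * execution logic; the field layout is an assumption. */
typedef struct {
    fdb_cmd_t cmd;
    uint64_t  key;   /* search key or address, when applicable */
    uint64_t  data;  /* associated data for learn/replace      */
} fdb_request_t;
```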
Abstract:
A system and method for updating packet headers using hardware that maintains the high performance of the network element. In one embodiment, the system includes an input port process (IPP) that buffers the received input packet and forwards header information to the search engine. The search engine searches a database maintained on the switch element to determine the type of the packet. In one embodiment, the type may indicate whether the packet can be routed in hardware. In another embodiment, the type may indicate whether the packet supports VLANs. The search engine sends the packet type information to the IPP along with the destination address (DA) to be updated if the packet is to be routed, or a VLAN tag if the packet has been identified to be forwarded to a particular VLAN. The IPP, during transmission of the packet to a packet memory, selectively replaces the corresponding fields, e.g., the DA field or VLAN tag field; the modified packet is stored in the packet memory. Associated with the packet memory are control fields containing control field information conveyed to the packet memory by the IPP. An output port process (OPP) reads the modified input packet and the control field information, selectively performs additional modifications to the modified input packet, and issues control signals to the output interface (i.e., MAC). The MAC, based upon the control signals, replaces the source address field with the address of the MAC and generates a CRC that is appended to the end of the packet.
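As a rough illustration of the IPP's selective field replacement, the sketch below overwrites the DA or the 802.1Q tag in a buffered frame. The offsets assume an already-tagged Ethernet frame, and the function names are hypothetical.

```c
#include <stdint.h>
#include <string.h>

#define ETH_DA_OFFSET   0   /* destination address, 6 bytes          */
#define ETH_ADDR_LEN    6
#define VLAN_TAG_OFFSET 12  /* 802.1Q tag in an already-tagged frame */
#define VLAN_TAG_LEN    4

/* Overwrite the DA with the next-hop address supplied by the search
 * engine when the packet is to be routed in hardware. */
static void ipp_replace_da(uint8_t *pkt, const uint8_t next_hop_da[ETH_ADDR_LEN])
{
    memcpy(pkt + ETH_DA_OFFSET, next_hop_da, ETH_ADDR_LEN);
}

/* Overwrite the VLAN tag when the search engine has identified the
 * VLAN on which the packet should be forwarded. */
static void ipp_replace_vlan_tag(uint8_t *pkt, const uint8_t vlan_tag[VLAN_TAG_LEN])
{
    memcpy(pkt + VLAN_TAG_OFFSET, vlan_tag, VLAN_TAG_LEN);
}
```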
Abstract:
A multi-layer distributed network element for relaying packets according to known routing protocols. A distributed architecture of multiple subsystems delivers routing at wire-speed performance across subnetworks. Each subsystem includes a forwarding memory and an associated memory and is configured to identify unicast and multicast packets for routing purposes, modify the packets in hardware, including replacing VLAN information, and forward the packets to the next hop. The routing decisions are made in the inbound subsystem, and packets are forwarded, if necessary given the network topology, through a separate outbound subsystem.
Abstract:
A multi-layer switch search engine architecture is provided. According to one aspect of the present invention, a switch fabric includes a search engine, and a packet header processing unit. The search engine may be coupled to a forwarding database memory and one or more input ports. The search engine is configured to schedule and perform accesses to the forwarding database memory and to transfer forwarding decisions to the one or more input ports. The header processing unit is coupled to the search engine and includes an arbitrated interface for coupling to the one or more input ports. The header processing unit is configured to receive a packet header from one or more of the input ports and is further configured to construct a search key for accessing the forwarding database memory based upon a predetermined portion of the packet header. The predetermined portion of the packet header is selected based upon a packet class with which the packet header is associated.
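A minimal sketch of class-dependent search-key construction follows. The two classes, the offsets (an untagged Ethernet/IPv4 frame), and the function name are assumptions used only to illustrate selecting a different header portion per packet class.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical packet classes; the real classes are implementation
 * specific. */
typedef enum { CLASS_L2, CLASS_IPV4 } pkt_class_t;

/* Build a search key from the header portion selected by the packet
 * class: MAC addresses for a Layer 2 lookup, IP addresses for a
 * Layer 3 lookup.  Returns the key length in bytes. */
static size_t build_search_key(pkt_class_t cls, const uint8_t *hdr,
                               uint8_t *key)
{
    switch (cls) {
    case CLASS_L2:                      /* DA + SA, 12 bytes           */
        memcpy(key, hdr, 12);
        return 12;
    case CLASS_IPV4:                    /* IPv4 source + destination   */
        memcpy(key, hdr + 14 + 12, 8);  /* 14-byte Ethernet header,    */
        return 8;                       /* addresses at IP offset 12   */
    }
    return 0;
}
```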
Abstract:
A multi-layer distributed network element for relaying packets according to known routing protocols. A distributed architecture of multiple subsystems delivers routing at wire-speed performance across subnetworks. Each subsystem includes a forwarding memory and an associated memory and is configured to identify unicast and multicast packets for routing purposes, modify the packets in hardware, including replacing VLAN information, and forward the packets to the next hop. The routing decisions are made in the inbound subsystem, and packets and associated control information are forwarded, if necessary given the network topology, through a separate outbound subsystem. When packets traverse the internal links from one subsystem to another, encapsulation operations are performed, such as appending an additional cyclic redundancy code (CRC) to the packet before it is sent over the internal link.
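As an illustration of the internal-link encapsulation step, the sketch below computes and appends a CRC-32 in software; the subsystems described above would do this in hardware, and the byte order of the appended CRC is an assumption.

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32 (IEEE 802.3, reflected form) -- a simple reference
 * implementation for illustration only. */
static uint32_t crc32_ieee(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1));
    }
    return ~crc;
}

/* Append the additional CRC to the packet before it crosses the
 * internal link between subsystems.  Returns the new length. */
static size_t encap_internal_crc(uint8_t *pkt, size_t len)
{
    uint32_t crc = crc32_ieee(pkt, len);
    pkt[len + 0] = (uint8_t)(crc);        /* little-endian append;  */
    pkt[len + 1] = (uint8_t)(crc >> 8);   /* the byte order here is */
    pkt[len + 2] = (uint8_t)(crc >> 16);  /* an assumption          */
    pkt[len + 3] = (uint8_t)(crc >> 24);
    return len + 4;
}
```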
Abstract:
Methods and systems are provided for steering network packets. According to one embodiment a method is provided for steering incoming network packets. Each network packet processing resource of a network routing/switching device is dynamically assigned to one or more network interfaces of the network routing/switching device. Each of the network packet processing resources includes one or more processing elements and a memory. Incoming network packets received by the network interfaces are steered to an appropriate network packet processing resource based on the dynamic assignment.
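A minimal sketch of such a dynamic assignment, using a hypothetical per-interface table that can be rewritten at runtime:

```c
#include <stdint.h>

#define MAX_IFACES 64

/* Hypothetical assignment table: the index of the packet processing
 * resource currently serving each network interface.  Rewriting an
 * entry at runtime re-steers that interface's traffic. */
static uint8_t assigned_resource[MAX_IFACES];

/* Dynamically (re)assign an interface to a processing resource. */
static void assign_interface(unsigned iface, uint8_t resource_id)
{
    assigned_resource[iface] = resource_id;
}

/* Steer an incoming packet to the resource assigned to the interface
 * on which it arrived. */
static uint8_t steer_packet(unsigned ingress_iface)
{
    return assigned_resource[ingress_iface];
}
```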
Abstract:
Methods and systems are provided for steering network packets. According to one embodiment, a dynamically configurable steering table is stored within a memory of each network interface of a network routing/switching device. The steering table represents a mapping that logically assigns each of the network interfaces to one of multiple packet processing resources of the network routing/switching device. The steering table contains information indicative of a unique identifier/address of the assigned packet processing resource. Responsive to receiving a packet on a network interface, the network interface performs Layer 1 or Layer 2 steering of the received packet to the assigned packet processing resource by retrieving the information indicative of the unique identifier/address of the assigned packet processing resource from the steering table based on a channel identifier associated with the received packet. The received packet is then processed by the assigned packet processing resource.
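The sketch below shows a possible in-memory form of such a steering table, indexed by channel identifier and holding a Layer 2 address for the assigned resource; the entry layout and names are assumptions.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_CHANNELS 256

/* Hypothetical steering-table entry held in a network interface's
 * memory: the Layer 2 address of the assigned processing resource. */
typedef struct {
    uint8_t resource_mac[6];
    uint8_t valid;
} steer_entry_t;

static steer_entry_t steering_table[MAX_CHANNELS];

/* On packet arrival, retrieve the address of the assigned resource
 * using the channel identifier associated with the packet, so the
 * interface can forward the packet to that resource at Layer 2. */
static const uint8_t *steer_by_channel(uint8_t channel_id)
{
    const steer_entry_t *e = &steering_table[channel_id];
    return e->valid ? e->resource_mac : NULL;
}
```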
Abstract:
Methods and systems are provided for steering network packets. According to one embodiment, a mapping associates a processing resource with a network interface module (netmod) and/or a number of line interface ports included within the netmod. In one embodiment, the mapping is configurable within the processing resource and pushed to the netmod. The netmod uses the mapping to steer network packets to the processing resource when the packets conform to the mapping. The mapping may be additionally used to identify a specific process that is to be performed against the packets once the processing resource receives the steered packets from the netmod.
Abstract:
Methods and systems are provided for steering network packets and bridging media channels to a single processing resource. A mapping associates a processing resource with a network interface module (Netmod) or a number of line interface ports included within the Netmod. In one embodiment, the mapping is configurable within the processing resource and pushed to the Netmod. The Netmod uses the mapping to steer network packets to the processing resource when the packets conform to the mapping. Moreover, the mapping can be used to identify a specific process that is to be performed against the packets once the processing resource receives the steered packets from the Netmod.