Abstract:
A method and system for shaping traffic in a multi-level queuing hierarchy are disclosed. The hierarchy includes a high priority channel and a low priority channel, wherein traffic on the low priority channel is fragmented and interleaved with traffic from the high priority channel, and the traffic combined from the high priority and low priority channels has a maximum shape rate. The method includes linking a high priority token bucket to a low priority token bucket, transmitting data from the high priority channel, and decrementing the low priority token bucket by an amount corresponding to the data transmitted. Data is transmitted from the low priority channel only if the low priority bucket has available tokens.
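As a rough illustration of the linked-bucket behavior described above, the following C sketch (with hypothetical structure and function names) charges the low priority bucket for every high priority transmission, so low priority fragments are sent only while that bucket still has tokens:

    /* Minimal sketch of the linked token-bucket idea; names and the
     * refill policy are illustrative, not taken from the abstract. */
    #include <stdbool.h>
    #include <stddef.h>

    struct token_bucket {
        long tokens;               /* tokens currently available (bytes) */
        long fill_rate;            /* tokens added per refill tick       */
    };

    struct shaper {
        struct token_bucket high;  /* high priority channel bucket */
        struct token_bucket low;   /* low priority channel bucket  */
    };

    /* Transmitting high priority data also charges the low priority bucket,
     * so the combined rate never exceeds the maximum shape rate. */
    static void send_high_priority(struct shaper *s, size_t len)
    {
        s->high.tokens -= (long)len;
        s->low.tokens  -= (long)len;   /* the "linked" decrement */
        /* ... enqueue the packet for transmission ... */
    }

    /* Low priority fragments may interleave only while tokens remain. */
    static bool try_send_low_priority(struct shaper *s, size_t len)
    {
        if (s->low.tokens < (long)len)
            return false;              /* defer the fragment */
        s->low.tokens -= (long)len;
        /* ... enqueue the fragment for transmission ... */
        return true;
    }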
Abstract:
A database management and indexing technique provides coherent access to and update of dynamic configuration information stored in a database associated with a multiprocessing environment of an aggregation router. The multiprocessing environment comprises a forwarding engine configured as a computing matrix of processors that operate on packets in a parallel as well as a pipelined fashion. A unique handle, i.e., a virtual common coherency index (VCCI) value, is associated with an interface regardless of whether it is a virtual or a physical interface. When a packet enters the computing matrix, it is classified and assigned a VCCI value based upon the interface over which it is received by or transmitted from the router. The assigned VCCI value is then passed along with the packet to each feature that processes the packet.
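The following C sketch illustrates the general idea of a single handle carried with the packet and used by every feature; the types, field names, and functions are assumptions made for illustration, not taken from the abstract:

    /* A minimal sketch of carrying a per-interface VCCI with a packet. */
    #include <stdint.h>

    typedef uint16_t vcci_t;            /* virtual common coherency index */

    struct feature_cfg { int enabled; /* ... per-feature settings ... */ };

    struct packet_ctx {
        vcci_t      vcci;               /* same handle for virtual or physical interfaces */
        const void *data;
        unsigned    len;
    };

    /* Classification: assign the handle of the interface the packet arrived on. */
    static void classify(struct packet_ctx *pkt, unsigned ifindex,
                         const vcci_t *vcci_by_ifindex)
    {
        pkt->vcci = vcci_by_ifindex[ifindex];
    }

    /* Each feature receives the packet together with its VCCI and uses it
     * to index that feature's slice of the configuration database. */
    static void run_feature(const struct packet_ctx *pkt,
                            const struct feature_cfg *cfg_by_vcci)
    {
        const struct feature_cfg *cfg = &cfg_by_vcci[pkt->vcci];
        (void)cfg;  /* ... apply the feature to pkt->data using cfg ... */
    }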
Abstract:
A method for identifying data is provided that includes receiving a data stream and performing a hashing operation on a portion of the data stream in order to identify a key that reflects an identity associated with the data stream. The method further includes storing a plurality of first and second hash table entries and comparing the key to the first and second hash table entries in order to evaluate whether there is a match between the key and the first and second hash table entries.
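A minimal C sketch of the described lookup might look as follows; the hash function (an FNV-1a style mix) and the table size are placeholders chosen only for illustration:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define TABLE_SIZE 1024u    /* assumed size of each hash table */

    /* Hash a portion of the data stream to derive the identifying key. */
    static uint32_t hash_portion(const uint8_t *data, size_t len)
    {
        uint32_t h = 2166136261u;              /* FNV-1a offset basis */
        for (size_t i = 0; i < len; i++)
            h = (h ^ data[i]) * 16777619u;     /* FNV-1a prime        */
        return h;
    }

    /* Compare the derived key against both tables; a hit in either one
     * identifies the data stream. */
    static bool lookup_key(uint32_t key,
                           const uint32_t *first_table,
                           const uint32_t *second_table)
    {
        uint32_t idx = key % TABLE_SIZE;
        return first_table[idx] == key || second_table[idx] == key;
    }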
Abstract:
Virtual Local Area Network (VLAN) trunking over Asynchronous Transfer Mode (ATM) Permanent Virtual Circuits (PVCs), defined as VTAP, allows traffic from multiple VLANs to be aggregated into a single data pipe in a Wide Area Network (WAN) environment. The largest benefit for the user is that a single PVC can be utilized to aggregate all of the user's VLAN traffic between two sites. Packets to be transmitted between two switches are first encapsulated with a VTAP header that contains pertinent information that allows the receiving switch to process and forward the packet. Certain information contained in the VTAP header is also used to determine the virtual path identifier/virtual channel identifier (VPI/VCI) of the destination switch, wherein the packet is segmented into ATM cells with the VPI/VCI prepended to them for forwarding via the ATM network.
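The layout of the VTAP header is not specified in the abstract, so the following C sketch uses assumed fields purely to illustrate mapping the header's VLAN information to the destination VPI/VCI before segmentation into cells:

    #include <stdint.h>

    /* Illustrative layout only; the actual VTAP header fields are assumptions. */
    struct vtap_header {
        uint16_t vlan_id;      /* VLAN the encapsulated frame belongs to   */
        uint16_t flags;        /* forwarding hints for the receiving switch */
    };

    struct vpi_vci {
        uint8_t  vpi;          /* virtual path identifier    */
        uint16_t vci;          /* virtual channel identifier */
    };

    /* Map the VLAN carried in the VTAP header to the PVC (VPI/VCI) that
     * reaches the destination switch; the packet is then segmented into
     * ATM cells carrying that VPI/VCI for forwarding across the WAN.
     * The table is assumed to hold one entry per VLAN ID (up to 4096). */
    static struct vpi_vci select_pvc(const struct vtap_header *h,
                                     const struct vpi_vci *pvc_by_vlan)
    {
        return pvc_by_vlan[h->vlan_id];
    }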
Abstract:
A processing engine includes descriptor transfer logic that receives descriptors generated by a software controlled general purpose processing element. The descriptor transfer logic manages transactions that send the descriptors to resources for execution and receive responses back from the resources in response to the sent descriptors. The descriptor transfer logic can manage the allocation and operation of the buffers and registers that initiate the transaction, track the status of the transaction, and receive the responses back from the resources, all on behalf of the general purpose processing element.
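As a hedged sketch of the descriptor transaction handling, the following C fragment (with illustrative names and a simple slot array standing in for the buffers and registers) shows the transfer logic issuing a descriptor and tracking its status on behalf of the processing element:

    #include <stdbool.h>
    #include <stdint.h>

    struct descriptor {
        uint32_t resource_id;   /* which resource executes the request */
        uint32_t opcode;
        uint64_t payload;
    };

    struct transaction {
        struct descriptor desc;
        volatile uint32_t status;   /* written back by the resource     */
        uint64_t          response; /* response data from the resource  */
        bool              busy;
    };

    /* The transfer logic allocates a slot, sends the descriptor, and later
     * collects the response on behalf of the general purpose processor. */
    static int issue_descriptor(struct transaction *slots, int nslots,
                                const struct descriptor *d)
    {
        for (int i = 0; i < nslots; i++) {
            if (!slots[i].busy) {
                slots[i].desc = *d;
                slots[i].busy = true;
                /* ... hand the descriptor to the target resource ... */
                return i;           /* handle used to poll for the response */
            }
        }
        return -1;                  /* no free slot available */
    }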
Abstract:
A mechanism synchronizes among processors of a processing engine in an intermediate network station. The processing engine is configured as a systolic array having a plurality of processors arrayed as rows and columns. The mechanism comprises a barrier synchronization mechanism that enables synchronization among processors of a column (i.e., different rows) of the systolic array. That is, the barrier synchronization function allows all participating processors within a column to reach a common point within their instruction code sequences before any of the processors proceed.
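A software analogue of such a column barrier is sketched below in C using a shared counter; the actual mechanism would be implemented in the engine's synchronization hardware, so this is illustrative only:

    #include <stdatomic.h>

    struct column_barrier {
        atomic_int arrived;     /* processors that have reached the barrier */
        int        count;       /* processors participating in the column   */
        atomic_int generation;  /* distinguishes successive barrier episodes */
    };

    /* Each participating processor calls this at the common point in its
     * instruction code sequence; none proceed until all have arrived. */
    static void barrier_wait(struct column_barrier *b)
    {
        int gen = atomic_load(&b->generation);
        if (atomic_fetch_add(&b->arrived, 1) + 1 == b->count) {
            atomic_store(&b->arrived, 0);        /* last arrival resets      */
            atomic_fetch_add(&b->generation, 1); /* releases the waiters     */
        } else {
            while (atomic_load(&b->generation) == gen)
                ;                                /* spin until released      */
        }
    }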
Abstract:
A mechanism synchronizes instruction code executing on a processor of a processing engine in an intermediate network station. The processing engine is configured as a systolic array having a plurality of processors arrayed as rows and columns. The mechanism comprises a boundary (temporal) synchronization mechanism for cycle-based synchronization within a processor of the array. The synchronization mechanism is generally implemented using specialized synchronization micro operation codes (“opcodes”).
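The synchronization opcodes themselves are not described here, so the following C sketch only approximates the idea of stalling until a known cycle boundary; the cycle counter function and the phase length are assumptions made for illustration:

    #include <stdint.h>

    #define PHASE_CYCLES 128u   /* assumed length of one processing phase */

    /* Hypothetical accessor for a free-running hardware cycle counter. */
    extern uint64_t read_cycle_counter(void);

    /* A "sync to boundary" operation: stall until the next phase boundary
     * so the instructions that follow begin executing on a known cycle. */
    static void sync_to_boundary(void)
    {
        uint64_t now  = read_cycle_counter();
        uint64_t next = (now / PHASE_CYCLES + 1) * PHASE_CYCLES;
        while (read_cycle_counter() < next)
            ;   /* busy-wait until the boundary cycle is reached */
    }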
Abstract:
A method and apparatus manage packet header buffers of a forwarding engine contained within an intermediate node, such as an aggregation router, of a computer network. Processors of the forwarding engine add and remove headers from packets using a packet header buffer, i.e., context memory, associated with each processor. Addition and removal of the headers occur while preserving a portion of the “on-chip” context memory for passing state information to and between processors of a pipeline, and also for passing move commands to direct memory access (DMA) logic external to the forwarding engine. A wrap control capability within the move command works in conjunction with the ability of the DMA logic to detect the end of the context and wrap to a specified offset within the context. That is, rather than wrapping to the beginning of a context, the wrap control capability specifies a predetermined offset within the context at which the wrap point occurs.
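A simple C sketch of the wrap behavior follows; the context size, function name, and parameters are assumptions used only to show a move wrapping to a specified offset rather than to offset zero:

    #include <stdint.h>

    #define CONTEXT_SIZE 512u    /* assumed size of the per-processor context */

    /* Copy 'len' bytes into the context starting at 'dst_off'.  When the end
     * of the context is reached, continue at 'wrap_off' rather than at 0, so
     * the leading region reserved for state and move commands is preserved. */
    static void context_move(uint8_t *context, unsigned dst_off,
                             const uint8_t *src, unsigned len,
                             unsigned wrap_off)
    {
        for (unsigned i = 0; i < len; i++) {
            if (dst_off == CONTEXT_SIZE)
                dst_off = wrap_off;      /* wrap point is the given offset */
            context[dst_off++] = src[i];
        }
    }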