Abstract:
A LAN interconnect device includes a plurality of Frame Processing Units (FPUs) for coupling each port of the device to a switch fabric. Each of the Frame Processing Units includes an input section with input logic which prepares LS Headers and appends each one to a block of the frame as the block is forwarded to the switch fabric. The FPU also includes an output section with copy logic for copying and assembling frames to be forwarded to devices connected to the port. The copy decision is based upon the LS Header and configuration information in the port.
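The copy decision described above can be illustrated with a small sketch. This is a toy model only: the LS Header fields (vlan_id, is_broadcast) and the port configuration fields (allowed_vlans, copy_broadcast) are illustrative assumptions, not details taken from the abstract.

```python
# Toy model of the FPU output-section copy decision described above.
# Field names are illustrative assumptions, not taken from the abstract.
from dataclasses import dataclass

@dataclass
class LSHeader:
    vlan_id: int          # LAN segment / VLAN the frame belongs to
    is_broadcast: bool    # whether the frame is a broadcast/multicast frame

@dataclass
class PortConfig:
    allowed_vlans: set    # VLANs this port participates in
    copy_broadcast: bool  # whether broadcasts should be copied to this port

def should_copy(header: LSHeader, port: PortConfig) -> bool:
    """Decide whether the output section copies a block arriving from the
    switch fabric, based on the LS Header and the port's configuration."""
    if header.vlan_id not in port.allowed_vlans:
        return False
    if header.is_broadcast and not port.copy_broadcast:
        return False
    return True

# Example: a frame tagged for VLAN 7 arriving at a port configured for VLAN 7
print(should_copy(LSHeader(vlan_id=7, is_broadcast=False),
                  PortConfig(allowed_vlans={7, 9}, copy_broadcast=True)))  # True
```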
Abstract:
A LAN switching system includes an Address Match Control line which can be set (activated) and is monitored by each port adapter card. If a port adapter card recognizes an address on the switch fabric, the adapter card copies the frame with the address and activates the Address Match Control line. The set Address Match Control line causes the remaining port adapter cards to stop searching for a match. If the Address Match Control line is not set, the frame can be copied by all port adapters which are configured to do so.
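A minimal sketch of the Address Match Control behavior, assuming the shared line can be modeled as a single flag visible to every port adapter card; the adapter table and the copy_unmatched policy flag are illustrative assumptions.

```python
# Toy model of the shared Address Match Control line. The shared line is
# modeled as a mutable flag; adapter details are illustrative assumptions.
class AddressMatchControl:
    def __init__(self):
        self.set = False

class PortAdapter:
    def __init__(self, name, known_addresses, copy_unmatched=False):
        self.name = name
        self.known_addresses = known_addresses
        self.copy_unmatched = copy_unmatched
        self.copied = []

    def try_match(self, frame, line):
        # Stop searching as soon as another adapter has set the line.
        if line.set:
            return
        if frame["dst"] in self.known_addresses:
            self.copied.append(frame)
            line.set = True   # tell the remaining adapters to stop searching

def deliver(frame, adapters):
    line = AddressMatchControl()
    for adapter in adapters:        # adapters monitor the fabric in turn
        adapter.try_match(frame, line)
    if not line.set:                # no match: copy at all configured adapters
        for adapter in adapters:
            if adapter.copy_unmatched:
                adapter.copied.append(frame)

a = PortAdapter("A", {"00:01"})
b = PortAdapter("B", {"00:02"}, copy_unmatched=True)
deliver({"dst": "00:01", "payload": b"hi"}, [a, b])
print(len(a.copied), len(b.copied))  # 1 0 -> only the matching adapter copied
```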
Abstract:
A device for interconnecting Local Area Networks (LANs) includes ports for attaching LAN segments and port modules for connecting the ports to a switch fabric. Each of the port modules includes a mechanism which searches the Routing Information (RI) field of a received frame to detect at least two Triplets (a minimum configuration for a LAN segment) indicating a Source path from an originating user and a Destination path to a destination user. The Triplets (singly or in combination) are used to access a database (tables) which identifies the Port of Exit (POE) through which the frame is to be routed.
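A sketch of the Triplet search and POE lookup, assuming the RI field can be treated as a flat sequence of 3-element Triplets and the POE database as a simple keyed table; both are assumptions made for illustration, not details from the abstract.

```python
# Toy sketch of scanning a Routing Information field for Triplets and
# looking up the Port of Exit (POE). The triplet encoding and the table
# layout are illustrative assumptions.
def extract_triplets(ri_field):
    """Split the RI field into consecutive Triplets (here: 3-tuples)."""
    return [tuple(ri_field[i:i + 3]) for i in range(0, len(ri_field) - 2, 3)]

def find_poe(ri_field, poe_table):
    triplets = extract_triplets(ri_field)
    if len(triplets) < 2:           # need at least a Source and a Destination path
        return None
    source_path, dest_path = triplets[0], triplets[-1]
    # The destination Triplet (alone or combined with the source) keys the table.
    return poe_table.get(dest_path) or poe_table.get((source_path, dest_path))

poe_table = {(5, 2, 9): "port-3"}
print(find_poe([1, 7, 4, 5, 2, 9], poe_table))   # 'port-3'
```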
Abstract:
A dynamic addressing technique mirrors data across multiple banks of a memory resource. Information stored in the memory banks is organized into separately addressable blocks, and memory addresses include a mirror flag. To write information mirrored across two memory banks, a processor issues a single write transaction with the mirror flag asserted. A memory controller detects that the mirror flag is asserted and, in response, waits for both memory banks to become available. At that point, the memory controller causes the write to be performed at both banks. To read data that has been mirrored across two memory banks, the processor issues a read with the mirror flag asserted. The memory controller checks the availability of both banks having the desired information. If either bank is available, the read request is accepted and the desired data is retrieved from the available bank and returned to the processor.
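A toy model of the mirror-flag addressing scheme follows. The address layout (which bit carries the mirror flag, which bit selects the bank) and the bank-availability model are assumptions made for illustration only.

```python
# Toy model of the mirror-flag addressing scheme. The address layout
# (bit 31 as the mirror flag, bit 30 selecting the bank) and the bank
# availability model are illustrative assumptions.
MIRROR_FLAG = 1 << 31
BANK_SELECT = 1 << 30
OFFSET_MASK = BANK_SELECT - 1

class MemoryController:
    def __init__(self):
        self.banks = [dict(), dict()]       # two mirrored banks
        self.busy = [False, False]          # availability of each bank

    def write(self, addr, value):
        offset = addr & OFFSET_MASK
        if addr & MIRROR_FLAG:
            # Mirrored write: wait until *both* banks are available,
            # then perform the write at both banks.
            while self.busy[0] or self.busy[1]:
                pass                        # (would stall/retry in hardware)
            self.banks[0][offset] = value
            self.banks[1][offset] = value
        else:
            bank = (addr & BANK_SELECT) >> 30
            self.banks[bank][offset] = value

    def read(self, addr):
        offset = addr & OFFSET_MASK
        if addr & MIRROR_FLAG:
            # Mirrored read: either bank may satisfy the request.
            for bank in (0, 1):
                if not self.busy[bank]:
                    return self.banks[bank][offset]
            return None                     # both busy: request not accepted yet
        bank = (addr & BANK_SELECT) >> 30
        return self.banks[bank][offset]

mc = MemoryController()
mc.write(MIRROR_FLAG | 0x100, 0xDEAD)       # single transaction, both banks updated
print(hex(mc.read(MIRROR_FLAG | 0x100)))    # 0xdead, from whichever bank is free
```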
Abstract:
A data path protocol eliminates most of the conventional read transactions required to transfer data between devices interconnected by a split transaction bus, such as a HyperTransport (HPT) bus. To that end, each device is configured to manage its own set of buffer descriptors, unlike previous data path protocols in which only one device managed all the buffer descriptors. As such, neither device has to perform a read transaction to retrieve a “free” buffer descriptor from the other device. As a result, only write transactions are performed for transferring descriptors across the HPT bus, thereby decreasing the amount of traffic over the bus and eliminating conventional latencies associated with read transactions. In addition, because descriptors are separately managed in each device, the data path protocol also conserves processing bandwidth that is traditionally consumed by managing ownership of the buffer descriptors within a single device.
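A sketch of the write-only descriptor exchange, assuming each device's descriptor pool can be modeled as a local queue and each cross-bus transfer as a single write; the descriptor fields and pool sizes are illustrative, not taken from the abstract.

```python
# Toy sketch of a write-only descriptor exchange: each device owns its own
# free-descriptor pool, so neither side ever issues a read to fetch a "free"
# descriptor from its peer. Ring sizes and field names are assumptions.
from collections import deque

class Device:
    def __init__(self, name, num_descriptors):
        self.name = name
        # Descriptors this device owns and hands out locally.
        self.free_descriptors = deque(range(num_descriptors))
        self.inbound = deque()     # descriptors written to us by the peer

    def send(self, peer, payload):
        desc_id = self.free_descriptors.popleft()        # local allocation, no bus read
        descriptor = {"id": desc_id, "owner": self.name, "payload": payload}
        peer.inbound.append(descriptor)                  # a single WRITE across the bus

    def receive(self, sender):
        descriptor = self.inbound.popleft()
        payload = descriptor["payload"]
        # Return the descriptor to its owner with another WRITE (no read needed).
        sender.free_descriptors.append(descriptor["id"])
        return payload

cpu, nic = Device("cpu", 4), Device("nic", 4)
cpu.send(nic, b"frame-0")
print(nic.receive(cpu), len(cpu.free_descriptors))   # b'frame-0' 4
```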
Abstract:
A packet communications system provides for point-to-point packet routing and multicast packet routing to limited subsets of nodes in the network, using a routing field in the packet header which is processed according to two different protocols. A third protocol is provided in which a packet can be multicast to the limited subset even when launched from a node which is not a member of the subset. The routing field includes a first portion which contains the route labels necessary to deliver the packet to the multicast subset. A second portion of the routing field contains the multicast subset identifier which can then be used to deliver the packet to all of the members of the multicast subset. Provision is made to deliver the packet back to the last node identified before the multicast subset if that node is itself a member of the subset.
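A toy model of the two-part routing field, assuming the route labels can be represented as a list and the multicast subset as a membership set; the field names, link table, and forwarding helper are illustrative assumptions.

```python
# Toy model of the two-part routing field: a list of route labels that steer
# the packet to the multicast subset, followed by a subset identifier used to
# reach every member. Field names and the subset membership table are
# illustrative assumptions.
def forward(packet, node, links, subsets):
    labels = packet["route_labels"]
    if labels:
        # First portion: point-to-point hops toward the multicast subset.
        next_hop = labels.pop(0)
        return [("unicast", links[node][next_hop])]
    # Second portion: deliver to every member of the multicast subset.
    members = subsets[packet["subset_id"]]
    deliveries = [("multicast", m) for m in members if m != node]
    # The node where the labels run out also takes a local copy only if it is
    # itself a member of the subset (loosely modeling the backtrack provision).
    if node in members:
        deliveries.append(("local", node))
    return deliveries

links = {"A": {"L1": "B"}, "B": {}}
subsets = {"green": {"B", "C", "D"}}
pkt = {"route_labels": ["L1"], "subset_id": "green"}
print(forward(pkt, "A", links, subsets))   # [('unicast', 'B')]
print(forward(pkt, "B", links, subsets))   # copies to C and D, plus a local copy at B
```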
Abstract:
A buffer-management technique efficiently manages a set of data buffers accessible to first and second devices interconnected by a split transaction bus, such as a HyperTransport (HPT) bus. To that end, a buffer manager controls access to a set of “free” buffer descriptors, each free buffer descriptor referencing a corresponding buffer in the set of data buffers. Advantageously, the buffer manager ensures that the first and second devices are allocated a sufficient number of free buffer descriptors for use in a HPT data path protocol in which the first and second devices have access to respective sets of free buffer descriptors. Because buffer management over the HPT bus is optimized by the buffer manager, the amount of processing bandwidth traditionally consumed by managing descriptors can be reduced.
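A sketch of the buffer manager's role, assuming a simple low-water-mark replenishment policy for keeping each device supplied with free descriptors; the threshold and pool sizes are illustrative assumptions.

```python
# Toy sketch of a buffer manager that keeps each device topped up with
# free buffer descriptors for the HPT data path protocol. The low-water
# mark and pool sizes are illustrative assumptions.
from collections import deque

class BufferManager:
    def __init__(self, total_buffers, low_water=4):
        self.free = deque(range(total_buffers))   # descriptors referencing data buffers
        self.pools = {}                           # per-device private descriptor sets
        self.low_water = low_water

    def register(self, device):
        self.pools[device] = deque()
        self.replenish(device)

    def replenish(self, device):
        # Ensure the device holds at least `low_water` free descriptors.
        pool = self.pools[device]
        while len(pool) < self.low_water and self.free:
            pool.append(self.free.popleft())

    def allocate(self, device):
        pool = self.pools[device]
        if not pool:
            self.replenish(device)
        return pool.popleft() if pool else None

    def release(self, descriptor):
        self.free.append(descriptor)

bm = BufferManager(total_buffers=16)
bm.register("host"); bm.register("forwarding-engine")
d = bm.allocate("host")
print(d, len(bm.pools["host"]), len(bm.pools["forwarding-engine"]))  # 0 3 4
```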
Abstract:
A backup CRF VLAN arrangement provides an alternate, redundant path for traffic between undistributed Concentrator Relay Functions (CRFs) located on separate switches interconnected by trunk links of a distributed token ring bridge. The backup CRF virtual local area network (VLAN) arrangement defines a backup network path which may be utilized if the primary active path to the backup network is not valid. Notably, the backup network comprises a special type of CRF that is distributed among the switches, but that has only one port active at any given time.
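A small sketch of the "only one port active at any given time" property, assuming a priority-based failover among the distributed CRF's ports; the priority scheme and port names are assumptions, not details from the abstract.

```python
# Toy model of the backup CRF: the CRF is distributed across switches, but
# only one of its ports is active at a time. The priority-based selection
# is an illustrative assumption.
class BackupCRF:
    def __init__(self, ports):
        # ports: {port_name: priority}; lower number = preferred
        self.ports = ports
        self.valid = {p: True for p in ports}
        self.active = self._elect()

    def _elect(self):
        candidates = [p for p in self.ports if self.valid[p]]
        return min(candidates, key=lambda p: self.ports[p]) if candidates else None

    def path_failed(self, port):
        # Primary path no longer valid: fail over so exactly one port stays active.
        self.valid[port] = False
        self.active = self._elect()

crf = BackupCRF({"switch1/port2": 1, "switch2/port5": 2})
print(crf.active)              # switch1/port2 (the primary)
crf.path_failed("switch1/port2")
print(crf.active)              # switch2/port5 (backup becomes the single active port)
```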
Abstract:
Presently disclosed is an apparatus and method for returning control of bandwidth allocation and packet scheduling to the routing engine in a network communications device containing an ATM interface. Virtual circuit (VC) flow control is augmented by the addition of a second flow control feedback signal from each virtual path (VP). VP flow control is used to suspend scheduling of all VCs on a given VP when traffic has accumulated on enough VCs to keep the VP busy. A new packet segmenter is employed to segment traffic while preserving the first-in, first-out (FIFO) order in which packet traffic was received. Embodiments of the invention may be implemented using a two-level (per-VC and per-VP) scheduling hierarchy or may use as many levels of flow-control-feedback-derived scheduling as are necessitated by multilevel scheduling hierarchies.
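A toy model of the second-level (per-VP) flow-control feedback, assuming a simple queue-depth threshold decides when a VP is "busy"; the threshold value and queue shapes are illustrative assumptions.

```python
# Toy model of two-level (per-VC and per-VP) flow-control feedback: when
# enough traffic has accumulated on a VP's VCs to keep the VP busy, the VP
# asserts flow control and scheduling of all its VCs is suspended. The
# busy threshold and queue shapes are illustrative assumptions.
from collections import deque

class VirtualPath:
    def __init__(self, busy_threshold=8):
        self.vcs = {}                        # vc_id -> deque of packets (FIFO order kept)
        self.busy_threshold = busy_threshold

    def enqueue(self, vc_id, packet):
        self.vcs.setdefault(vc_id, deque()).append(packet)

    def flow_controlled(self):
        # Second-level feedback: suspend all VCs once the VP has enough work queued.
        return sum(len(q) for q in self.vcs.values()) >= self.busy_threshold

def schedule(vps):
    """One scheduling pass: skip every VC whose VP has asserted flow control."""
    sent = []
    for vp_id, vp in vps.items():
        if vp.flow_controlled():
            continue                         # VP feedback suspends all of its VCs
        for vc_id, queue in vp.vcs.items():
            if queue:
                sent.append((vp_id, vc_id, queue.popleft()))
    return sent

vps = {"vp0": VirtualPath(busy_threshold=2)}
vps["vp0"].enqueue("vc1", "segA")
print(schedule(vps))                         # vp0 not busy: segA is scheduled
vps["vp0"].enqueue("vc1", "segB"); vps["vp0"].enqueue("vc2", "segC")
print(schedule(vps))                         # [] -> vp0 busy, its VCs are suspended
```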
Abstract:
A technique non-disruptively recovers from a processor failure in a multi-processor flow device, such as an intermediate network node of a computer network. Data relating to a particular data flow of a processor within the node is tagged with specific information used to detect and recover from a failure of the processor without affecting data from other processors of the node. A data path management device tags the data with the specific information reflecting the processor issuing the data and a state of the processor. When the tagged data subsequently passes through the data path management device, the specific information is compared with current information for the issuing processor. If the comparison indicates that the specific information is valid, the data path management device forwards the related data flow through the node. If the comparison indicates that the specific information is invalid, the data and its related data flow are discarded and “cleanly” purged from the node.
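A sketch of the tag-and-compare mechanism, assuming the processor "state" can be modeled as a generation counter that is bumped when the processor fails and restarts; the tag fields and processor names are illustrative assumptions.

```python
# Toy model of the tag-and-compare recovery scheme: data is tagged with the
# issuing processor's ID and its current "generation" (state); after a
# processor failure the generation is bumped, so in-flight data from the
# failed instance is detected and purged. The tag layout is an assumption.
class DataPathManager:
    def __init__(self, processors):
        self.generation = {p: 0 for p in processors}   # current state per processor

    def tag(self, proc_id, data):
        # Tag the data with the issuing processor and its current state.
        return {"proc": proc_id, "gen": self.generation[proc_id], "data": data}

    def processor_failed(self, proc_id):
        # Non-disruptive recovery: only this processor's generation changes.
        self.generation[proc_id] += 1

    def forward(self, tagged):
        # Compare the tag with the issuing processor's current state.
        if tagged["gen"] == self.generation[tagged["proc"]]:
            return tagged["data"]            # valid: forward through the node
        return None                          # stale: discard and purge the flow

dpm = DataPathManager(["cpu0", "cpu1"])
pkt = dpm.tag("cpu0", b"flow-data")
print(dpm.forward(pkt))                      # b'flow-data' (tag still valid)
dpm.processor_failed("cpu0")                 # cpu0 fails and restarts; state changes
print(dpm.forward(pkt))                      # None -> stale data is cleanly purged
```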