Abstract:
A non-blocking virtual switch architecture for a data communication network. The switch includes a plurality of input ports and output ports. Each input port may be connected to each output port by a directly connected network or by a mesh network, so data packets may traverse the switch simultaneously with other packets. At each output port, buffer space is dedicated to queuing packets received from each of the input ports. An arbitration scheme is used to forward data from the buffers to the network. Accordingly, a crossbar array, and its associated traffic bottlenecks, are avoided. Rather, the system advantageously provides separate buffer space at each output port for every input port.
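To make the per-port buffering and arbitration concrete, the following minimal Python sketch (hypothetical names such as OutputPort and arbitrate; the abstract does not specify the arbitration scheme, so simple round-robin is assumed here) models an output port that keeps a dedicated queue for each input port and drains them one packet at a time.

    from collections import deque

    class OutputPort:
        """Output port with a dedicated queue per input port (illustrative sketch)."""
        def __init__(self, num_inputs):
            self.queues = [deque() for _ in range(num_inputs)]
            self._rr = 0  # round-robin arbitration pointer

        def enqueue(self, input_port, packet):
            # Buffer space is dedicated per input port, so inputs never block each other.
            self.queues[input_port].append(packet)

        def arbitrate(self):
            # Round-robin arbitration among the non-empty per-input queues.
            for i in range(len(self.queues)):
                idx = (self._rr + i) % len(self.queues)
                if self.queues[idx]:
                    self._rr = (idx + 1) % len(self.queues)
                    return self.queues[idx].popleft()
            return None  # nothing to forward

    # Example: a 4-input switch; packets from inputs 0 and 2 arrive at one output port.
    port = OutputPort(num_inputs=4)
    port.enqueue(0, "pkt-A")
    port.enqueue(2, "pkt-B")
    print(port.arbitrate(), port.arbitrate())  # forwards pkt-A, then pkt-B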
Abstract:
A quality of service technique for a data communication network. Using a combination of Time Division Multiplexing (TDM) and packet switching, the system is configured to guarantee a predefined bandwidth for a client, which, in turn, helps manage delay and jitter in the data transmission. An ingress processor operates as a bandwidth filter, transmitting packet bursts to distribution channels for queuing in a queuing engine. The queuing engine holds the data packets for subsequent scheduled transmission over the network, which is governed by predetermined priorities. These priorities may be established based on several factors, including pre-allocated bandwidth and system conditions. A scheduler then transmits the data held by the queuing engine using a self-clocked fair queuing method.
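The abstract names self-clocked fair queuing without giving its details. The sketch below is a minimal, hypothetical illustration of that general method: each arriving packet is stamped with a virtual finish tag of max(current virtual time, the flow's last tag) plus length divided by weight, packets are served in tag order, and the virtual clock advances to the tag of the packet being served. Class and flow names are assumptions for illustration only.

    import heapq

    class SCFQScheduler:
        """Minimal self-clocked fair queuing sketch (hypothetical names and weights)."""
        def __init__(self):
            self.virtual_time = 0.0   # follows the finish tag of the packet in service
            self.last_finish = {}     # last finish tag per flow
            self.heap = []            # (finish_tag, seq, flow, packet)
            self._seq = 0

        def enqueue(self, flow, weight, packet, length):
            start = max(self.virtual_time, self.last_finish.get(flow, 0.0))
            finish = start + length / weight   # larger weight => more bandwidth
            self.last_finish[flow] = finish
            heapq.heappush(self.heap, (finish, self._seq, flow, packet))
            self._seq += 1

        def dequeue(self):
            if not self.heap:
                return None
            finish, _, flow, packet = heapq.heappop(self.heap)
            self.virtual_time = finish  # "self-clocked": clock set from the served packet
            return flow, packet

    sched = SCFQScheduler()
    sched.enqueue("client-A", weight=2.0, packet="a1", length=100)
    sched.enqueue("client-B", weight=1.0, packet="b1", length=100)
    print(sched.dequeue())  # client-A is served first (finish tag 50 < 100)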
Abstract:
A metro switch and method for transporting data configured according to multiple different formats. In one aspect, a network system and method provide for point-to-point communication of data in various different formats such as ATM, frame relay, PPP, Ethernet, etc. Accordingly, the invention may interconnect disparate network devices, such as private networks and other entities that operate according to various different protocols and that use various different media. At ingress points to the system, the data is received from data sources and configured according to a universal format. This allows data from origins that use different data formats and/or transmission media to be mixed and transported over the same media. The data is then transported to one or more destinations using this format. At egress points of the system, the data is reconverted to its original format for use at its destination. Thus, transportation of various different communication services and distribution to respective destinations is provided through a high-speed interconnect. Because the data is converted to a universal format and reconverted to its original format as needed, various different data formats may be accommodated and may share the same communication media.
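As a rough illustration of the ingress/egress conversion, the sketch below wraps incoming data together with a record of its original format and unwraps it at the egress point. The UniversalFrame envelope and helper names are hypothetical and do not represent the actual universal format; a real implementation would rebuild the format-specific headers on egress.

    from dataclasses import dataclass

    @dataclass
    class UniversalFrame:
        """Hypothetical universal envelope carrying the original format and payload."""
        original_format: str   # e.g. "ATM", "frame-relay", "PPP", "Ethernet"
        destination: str
        payload: bytes

    def ingress_convert(raw: bytes, fmt: str, destination: str) -> UniversalFrame:
        # At an ingress point, wrap the original data in the universal format.
        return UniversalFrame(original_format=fmt, destination=destination, payload=raw)

    def egress_reconvert(frame: UniversalFrame) -> bytes:
        # At the egress point, recover the data in its original format.
        return frame.payload

    frame = ingress_convert(b"\x45\x00", fmt="Ethernet", destination="node-7")
    assert egress_reconvert(frame) == b"\x45\x00"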
Abstract:
The present invention is directed toward methods and apparatus for packet transmission scheduling in a data communication network. In one aspect, received data packets are assigned to an appropriate one of a plurality of scheduling heap data structures. Each scheduling heap data structure is percolated to identify a most eligible data packet in each heap data structure. A highest-priority one of the most eligible data packets is identified by prioritizing among the most eligible data packets. This is useful because the scheduling tasks may be distributed among a hierarchy of schedulers to efficiently handle data packet scheduling. Another aspect provides a technique for combining priority schemes, such as strict priority and weighted fair queuing. This is useful because packets may have equal priorities or no priorities, such as in the case of certain legacy equipment.
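A minimal sketch of the heap-based idea follows, under the assumption that each heap orders packets by an eligibility key (for example, a weighted-fair finish tag) while strict priority selects among the heaps' roots; class and variable names are hypothetical, and Python's heapq stands in for the hardware percolation described in the abstract.

    import heapq

    class HeapScheduler:
        """One scheduling heap; percolation keeps the most eligible packet at the root."""
        def __init__(self):
            self.heap = []  # (eligibility key, seq, packet); smaller key = more eligible
            self._seq = 0

        def push(self, key, packet):
            heapq.heappush(self.heap, (key, self._seq, packet))
            self._seq += 1

        def peek(self):
            return self.heap[0] if self.heap else None

        def pop(self):
            return heapq.heappop(self.heap)[2] if self.heap else None

    # Combining schemes: strict priority chooses among heaps, while the key inside a
    # heap can encode a weighted-fair-queuing finish tag.
    heaps = {0: HeapScheduler(), 1: HeapScheduler()}   # 0 = highest strict priority
    heaps[1].push(5.0, "best-effort pkt")
    heaps[0].push(9.0, "premium pkt")

    def next_packet():
        for prio in sorted(heaps):            # strict priority across heaps
            if heaps[prio].peek() is not None:
                return heaps[prio].pop()      # fair-queuing order within a heap
        return None

    print(next_packet())  # "premium pkt" wins despite its larger intra-heap key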
Abstract:
A technique for time division multiplex (TDM) forwarding of data streams. The system uses a common switch fabric resource for TDM and packet switching. In operation, large packets or data streams are divided into smaller portions upon entering a switch. Each portion is assigned a high priority for transmission and a tracking header for tracking it through the switch. Prior to exiting the switch, the portions are reassembled into the data stream. Thus, the smaller portions are passed using a "store-and-forward" technique. Because the portions are each assigned a high priority, the data stream is effectively "cut-through" the switch. That is, the switch may still be receiving later portions of the stream while the switch is forwarding earlier portions of the stream. This technique of providing "cut-through" using a store-and-forward switch mechanism reduces transmission delay and buffer over-runs that otherwise would occur in transmitting large packets or data streams.
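The segmentation-and-reassembly idea can be sketched as follows. The portion size, header fields, and names (Portion, segment, reassemble) are hypothetical; the sketch only illustrates dividing a stream into tracked, high-priority portions and restoring their order before the stream exits the switch.

    from dataclasses import dataclass

    HIGH_PRIORITY = 0
    CHUNK = 4  # hypothetical portion size, in bytes

    @dataclass
    class Portion:
        stream_id: int
        seq: int          # tracking header: position of this portion in the stream
        last: bool
        priority: int
        data: bytes

    def segment(stream_id: int, data: bytes):
        # Divide the stream into small portions, each tagged high priority.
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
        return [Portion(stream_id, seq, seq == len(chunks) - 1, HIGH_PRIORITY, c)
                for seq, c in enumerate(chunks)]

    def reassemble(portions):
        # Prior to exiting the switch, put the portions back in stream order.
        ordered = sorted(portions, key=lambda p: p.seq)
        return b"".join(p.data for p in ordered)

    parts = segment(stream_id=1, data=b"0123456789abcdef")
    assert reassemble(parts) == b"0123456789abcdef"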
Abstract:
An address learning technique in a data communication network. A packet may be received by the network system when the ingress equipment does not yet have the information to look up the appropriate path for the packet based upon its destination media access control (MAC) address. The packet may then be broadcast or multicast to all possible or probable destinations for the packet. Each such destination adds an entry to its lookup table using the source MAC address and an identification of the path by which it received the packet. Then, when one of those destinations (acting as ingress equipment) receives a packet intended for that MAC address, it will have the necessary information in its lookup table to send the packet along the appropriate path. Thus, table entries are formed and stored in the remote destinations rather than in the ingress equipment.
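A small sketch of the learning behavior described above, with hypothetical names (EgressNode, receive, forward): a destination records the source MAC address and arrival path of flooded packets, then consults that table when it later acts as ingress for traffic destined to that MAC address, flooding only when the address is still unknown.

    class EgressNode:
        """Hypothetical destination node that learns source MAC -> path mappings."""
        def __init__(self, name):
            self.name = name
            self.lookup = {}   # MAC address -> path on which it was learned

        def receive(self, src_mac, path):
            # Learn: remember the path over which this source MAC was seen.
            self.lookup[src_mac] = path

        def forward(self, dst_mac, packet, flood_paths):
            # Acting as ingress: use the learned path if known, otherwise flood.
            if dst_mac in self.lookup:
                return [(self.lookup[dst_mac], packet)]
            return [(p, packet) for p in flood_paths]

    node = EgressNode("B")
    node.receive(src_mac="00:aa:bb:cc:dd:ee", path="path-3")   # learned from a flood
    print(node.forward("00:aa:bb:cc:dd:ee", "reply-pkt",
                       flood_paths=["path-1", "path-2", "path-3"]))
    # -> [('path-3', 'reply-pkt')]; an unknown MAC would be flooded to all paths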