Abstract:
Distributed communication equipment architectures and techniques are disclosed. A host system includes an expansion unit through which control information and communication traffic may be exchanged with an expansion system. The expansion system is thereby controllable by a controller at the host system, significantly simplifying the design and reducing the cost of the expansion system. The expansion unit for a host system may also provide one or more configurable communication link interfaces. Each configurable interface may be independently configured as a network-side interface for connection to upstream communication equipment or as an access-side expansion interface for connection to an expansion system, allowing provisioning of network and access interfaces at the host system as needed.
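To make the dual-role idea concrete, the following Python sketch models a configurable link interface; the class, enum, and attribute names are illustrative assumptions, not terms from the disclosure.

```python
from enum import Enum

class LinkRole(Enum):
    NETWORK_SIDE = "network"  # uplink towards upstream communication equipment
    ACCESS_SIDE = "access"    # expansion interface towards an expansion system

class ConfigurableLink:
    """One configurable communication link interface on the expansion unit."""
    def __init__(self, port_id: int):
        self.port_id = port_id
        self.role = None  # unconfigured until provisioned

    def configure(self, role: LinkRole) -> None:
        # Each interface is configured independently, so the host can
        # provision any mix of network-side and access-side interfaces.
        self.role = role

# Provision two uplinks and one expansion link on a three-port unit.
links = [ConfigurableLink(i) for i in range(3)]
links[0].configure(LinkRole.NETWORK_SIDE)
links[1].configure(LinkRole.NETWORK_SIDE)
links[2].configure(LinkRole.ACCESS_SIDE)
```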
Abstract:
The present invention relates to a switching unit with low-latency flow control. Queuing parameters of the ingress queues in which incoming traffic is backlogged are measured to detect a short-term traffic increase. Additional bandwidth is then negotiated to accommodate this unexpected additional traffic, provided that the corresponding input and output termination modules still have bandwidth available, temporarily disregarding fairness. This additional bandwidth allows the unexpected traffic to be drained from the ingress queue as soon as possible, without waiting for the next fair redistribution of system bandwidth, thereby improving traffic latency through the switching unit.
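A minimal Python sketch of the backlog-triggered negotiation described above; the threshold, rate variables, and pipe attribute are assumed for illustration.

```python
BURST_THRESHOLD = 1000  # queued elements; assumed trigger for a short-term burst

def on_queue_update(queue_depth, arrival_rate, drain_rate,
                    input_free_bw, output_free_bw, pipe):
    """Grant extra bandwidth to a backlogged ingress queue immediately,
    temporarily disregarding fairness, instead of waiting for the next
    system-wide fair redistribution. All names and the threshold are
    illustrative assumptions."""
    if queue_depth > BURST_THRESHOLD and arrival_rate > drain_rate:
        # Only grant what both termination modules still have available.
        extra = min(arrival_rate - drain_rate, input_free_bw, output_free_bw)
        if extra > 0:
            pipe.rate += extra  # reclaimed at the next fair redistribution
```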
Abstract:
The present invention relates to a switching unit with scalable and QoS-aware flow control. The actual schedule rate of an egress queue, in which outgoing traffic belonging to a particular class of service is backlogged, is measured and compared to its expected schedule rate. If the egress queue is scheduled below expectation, the bandwidth of every virtual ingress-to-egress pipe feeding it from an ingress queue, in which incoming traffic of the same class of service is backlogged before transmission through the switch core fabric, is increased, thereby supplying that egress queue with more data units.
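A minimal Python sketch of the rate comparison and pipe widening, assuming illustrative attribute names and a simple proportional adjustment (the `gain` knob is not from the abstract).

```python
def widen_pipes(egress_queue, feeding_pipes, gain=0.1):
    """If the egress queue is scheduled below expectation, widen every
    virtual ingress-to-egress pipe of the same class of service whose
    ingress queue is backlogged, so the egress queue receives more data
    units. 'gain' and the attribute names are assumed for illustration."""
    deficit = egress_queue.expected_rate - egress_queue.measured_rate
    if deficit > 0:
        for pipe in feeding_pipes:        # pipes carrying the same CoS
            if pipe.ingress_backlogged:   # backlogged before the switch core
                pipe.rate += gain * deficit
```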
Abstract:
A system for obtaining efficient and scalable flow control in a large packet-switched network including ingress termination boards (B1″) linked to egress termination boards (B4″) by means of virtual ingress-to-egress flow control links through a switch core. The flow control transmission link between a port of an ingress termination board and a port of an egress termination board comprises at least two virtual ingress-to-egress flow-controlled traffic pipes (VIEP″a, VIEP″b): one pipe handles all traffic between the two ports going towards communication channels for which no congestion is detected at the egress termination board, while the other handles all traffic going towards communication channels for which congestion is detected.
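A minimal Python sketch of the two-pipe steering decision; the packet attribute and pipe names mirror the VIEP″a/VIEP″b roles but are otherwise assumptions.

```python
def select_pipe(packet, congested_channels, viep_a, viep_b):
    """Steer traffic onto one of the two per-port-pair virtual pipes:
    viep_a for channels with no egress congestion detected, viep_b for
    congested channels, so back-pressure on viep_b never holds up the
    uncongested traffic. Names and the channel lookup are assumptions."""
    return viep_b if packet.dest_channel in congested_channels else viep_a
```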
Abstract:
A switch at a transmission end of a system including a number of memory devices defining queues for receiving traffic to be switched, each queue having an associated predetermined priority classification, and a processor for controlling the transmission of traffic from the queues. The processor transmits traffic from the higher priority queues before traffic from lower priority queues. The processor monitors the queues to determine whether traffic has arrived at a queue having a higher priority classification than the queue from which traffic is currently being transmitted. The processor suspends the current transmission after transmission of the current minimum transmittable element if traffic has arrived at a higher priority queue, transmits traffic from the higher priority queue, and then resumes the suspended transmission. At a receiving end, a switch that includes a processor separates the interleaved traffic into output queues for reassembly of individual traffic streams from the data stream.
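A single-threaded Python sketch of the element-granularity preemption, assuming simple deque-based queues; suspend and resume fall out of re-selecting the highest-priority non-empty queue after every minimum transmittable element.

```python
from collections import deque

def transmit(queues, send):
    """queues: list indexed by priority (0 = highest) of deques of streams,
    each stream a deque of minimum transmittable elements; send() puts one
    element on the wire. Re-checking priorities after every element gives
    the suspend/resume behaviour: a newly arrived higher-priority stream
    preempts at element granularity, and the interrupted stream simply
    continues once nothing of higher priority remains. Data structures
    are assumed for illustration."""
    while any(queues):
        prio = next(p for p, q in enumerate(queues) if q)
        stream = queues[prio][0]
        send(stream.popleft())        # one minimum transmittable element
        if not stream:
            queues[prio].popleft()    # stream fully transmitted

# Example: one high- and one low-priority stream (arrivals would be
# appended to queues[p] by a separate receive path).
queues = [deque([deque(["hi-1", "hi-2"])]),
          deque([deque(["lo-1", "lo-2"])])]
transmit(queues, send=print)
```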
Abstract:
Systems (1) comprising aggregation sub-systems (14,15) and tributary sub-systems (11-13) with direct aggregation interfaces (31-33,41-43) for exchanging traffic with the aggregation sub-systems (14,15) gain flexibility by providing at least one tributary sub-system (12) with an indirect aggregation interface (35) for exchanging traffic with another indirect aggregation interface (44) of another tributary sub-system (11) in the system (1). Traffic can then be exchanged not just between the direct aggregation interfaces (31-33,41-43) of tributary sub-systems (11-13) and the aggregation sub-systems (14,15) but also between the indirect aggregation interfaces (34-36,44-46) of tributary sub-systems (11-13). By providing at least one tributary sub-system (12) with a further indirect aggregation interface (45) for exchanging traffic with yet another indirect aggregation interface (36) of yet another tributary sub-system (13) in the system (1), the tributary sub-system (12) can exchange traffic with two other tributary sub-systems (11,13). The tributary sub-systems (11-13) are tributary line terminations and the aggregation sub-systems (14,15) are aggregation network terminations.
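The resulting chain topology can be pictured with a minimal Python model; the link lists are assumptions that follow the reference numerals of the abstract.

```python
# Assumed minimal model of the interconnect: each link is an endpoint pair.
direct_links = [          # tributary <-> aggregation sub-system
    ("tributary-11", "aggregation-14"),
    ("tributary-12", "aggregation-14"),
    ("tributary-13", "aggregation-15"),
]
indirect_links = [        # tributary <-> tributary, via indirect interfaces
    ("tributary-12", "tributary-11"),  # interfaces 35 <-> 44
    ("tributary-12", "tributary-13"),  # interfaces 45 <-> 36
]
```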
Abstract:
Network processors commonly use DRAM chips to store data. Each DRAM chip contains multiple banks for quick storage of and access to data. Latency in the transfer, or 'write', of data into memory can occur because of a phenomenon referred to as memory bank polarization. A procedure called quadword rotation effectively eliminates this latency effect. Data frames received by the network processor are transferred to a receive queue (FIFO). The frames are divided into segments that are written into the DRAM in accordance with a formula that rotates the distribution of successive segments across the memory banks.
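A minimal Python sketch of one possible rotation formula; the disclosure's exact formula may differ, and the bank count is assumed.

```python
NUM_BANKS = 4  # assumed number of DRAM banks

def bank_for_segment(frame_index: int, segment_index: int) -> int:
    """Quadword rotation: successive segments of a frame are spread over
    successive banks, and the starting bank rotates from frame to frame,
    so no bank is systematically written first (avoiding polarization).
    The formula is an assumed illustration of the rotation idea."""
    return (frame_index + segment_index) % NUM_BANKS

# Frame 0 uses banks 0,1,2,3; frame 1 uses 1,2,3,0; and so on.
for frame in range(2):
    print([bank_for_segment(frame, seg) for seg in range(4)])
```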
Abstract:
The invention relates to a telecommunication network with IP packet-supporting capabilities, which includes a load distribution processing function, either centralized or distributed, by means of which a load distribution function may be applied to sets of paths between network nodes or to sets of links of network trunks. The load distribution processing function handles different load distribution functions, each associated with a different network input unit involved in the load distribution for a set of paths between network nodes or a set of trunk links. The invention also relates to a method of load distribution in such a telecommunication network.
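A minimal Python sketch in which each network input unit gets its own load distribution function over a shared set of paths; the seeded-hash scheme is an assumed illustration, not the patented method.

```python
import hashlib

def make_distribution_function(unit_seed: int, paths: list):
    """Build a per-input-unit load distribution function over a set of
    paths or trunk links. Seeded hashing makes each unit's function
    different, matching the one-function-per-input-unit association;
    the hashing scheme itself is an assumed illustration."""
    def distribute(flow_id: str) -> str:
        digest = hashlib.sha256(f"{unit_seed}:{flow_id}".encode()).digest()
        return paths[int.from_bytes(digest[:4], "big") % len(paths)]
    return distribute

# Two network input units, each with its own function over the same paths.
unit_a = make_distribution_function(1, ["path-1", "path-2", "path-3"])
unit_b = make_distribution_function(2, ["path-1", "path-2", "path-3"])
```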
Abstract:
Switch fabrics (10) comprise first stages (11) for receiving multicast input signals (A-C) and second stages (12) for generating output signals in response to the input signals. The switch fabrics (10) are coupled to detectors (31) for detecting parameters indicating conditions of the second stages (12) per output signal and for generating detection results per output signal, and to controllers (21) for controlling the second stages (12) per output signal in response to the detection results. Such switch fabrics (10) handle output congestion in a more individual way. When one part of the second stage (12) is congested, the copying of the multicast input signals (A-C) into output signals and their internal transmission no longer need to cease; only the output signal corresponding to the congested part of the second stage (12) cannot be delivered. Further detectors (32) detect further input signals (A-C) comprising segments of the same protocol data unit as the input signals (A-C).
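A minimal Python sketch of the per-output-signal handling; the set-based congestion lookup and the deliver() call are assumptions.

```python
def forward_multicast(data_unit, outputs, congested_outputs):
    """Per-output handling of one multicast signal in the second stage.

    'congested_outputs' is the set of outputs whose part of the second
    stage the detectors report as congested. Rather than ceasing the whole
    copy-and-forward when any part is congested, only the copies aimed at
    congested outputs are withheld; all other outputs are still served.
    Names are assumptions."""
    for out in outputs:
        if out not in congested_outputs:
            out.deliver(data_unit)
        # else: only this output's copy is held back
```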