Abstract:
A configuration for a transport node of a telecommunication system comprises a pair of transparent mux/demuxes provided at two sites and connected over a high rate span. The T-Muxes provide continuity of all tribs and maintain a lower bit rate linear or ring system through the higher bit rate span. The lower bit rate linear or ring system operates as if it were directly connected without the higher bit rate midsection. For the forward direction of the traffic, the T-Mux comprises a multi-channel receiver for receiving the trib signals and providing for each trib signal a trib data signal and a trib OAM&P signal. The data signals are multiplexed into a supercarrier data signal and the OAM&P signals are processed to generate a supercarrier OAM&P signal. A supercarrier transmitter maps the supercarrier data signal and the supercarrier OAM&P signal into a supercarrier signal and transmits same over the high rate span. Reverse operations are effected for the reverse direction of traffic. With this invention, an entire ring system does not have to be upgraded to a higher line rate due to fiber exhaust on a single span. The invention is particularly applicable to SONET OC-48/OC-12/OC-3 linear and ring networks and the high rate span could be an OC-192 line.
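The forward-direction flow described above can be sketched as follows. This is a minimal illustration only: the function name, the byte-interleaving scheme, and the OAM&P record layout are assumptions for clarity, not the actual SONET multiplexing rules.

```python
def tmux_forward(tribs):
    """Sketch of the T-Mux forward direction: mux trib data signals into a
    supercarrier data signal, and process (not merely concatenate) the trib
    OAM&P signals into a supercarrier OAM&P signal."""
    # Byte-interleave the trib data signals round-robin, zero-padding
    # shorter tribs (an illustrative simplification).
    width = max(len(t["data"]) for t in tribs)
    data = bytearray()
    for i in range(width):
        for t in tribs:
            data.append(t["data"][i] if i < len(t["data"]) else 0)
    # Process the per-trib OAM&P: keep each trib's status and derive an
    # aggregate alarm for the supercarrier.
    oamp = {
        "trib_status": [t["oamp"] for t in tribs],
        "alarm": any(t["oamp"].get("alarm", False) for t in tribs),
    }
    # Map payload and OAM&P into the supercarrier signal.
    return {"data": bytes(data), "oamp": oamp}
```

The reverse direction would invert these steps: de-interleave the supercarrier data back into trib data signals and regenerate per-trib OAM&P.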
Abstract:
A configuration for a SONET transport node comprises a pair of transparent mux/demuxes provided at two sites and connected over a high rate span. The T-Muxes provide continuity of all tribs and maintain a lower bit rate linear or ring system through the higher bit rate span. The lower bit rate linear or ring system operates as if it were directly connected without the higher bit rate midsection. For the forward direction of the traffic, the T-Mux comprises a multi-channel receiver for receiving the trib signals and providing for each trib signal a trib SPE and a trib OH. The trib SPEs are multiplexed into a supercarrier SPE and the trib OHs are processed to generate a supercarrier OH. A supercarrier transmitter maps the supercarrier SPE and the supercarrier OH into a supercarrier signal and transmits same over the high rate span. Reverse operations are effected for the reverse direction of traffic. With this invention, an entire ring system does not have to be upgraded to a higher line rate due to fiber exhaust on a single span. The invention is particularly applicable to OC-48/OC-12/OC-3 linear and ring networks and the high rate span could be an OC-192 line.
Abstract:
Architectures for a synchronous transport network of a telecommunications system using transparent transport capabilities are presented. The telecommunications network comprises a pair of transparent multiplexers (TMuxes) connected over a bidirectional high speed span for transparently transporting high rate traffic. Each TMux consolidates traffic from a plurality (I) of linear systems or a plurality of bidirectional self-healing rings, each ring (Ki) having a ring rate Ri and at least two nodes (Ai, Bi). In another configuration, each TMux subtends a plurality of rings, such TMuxes being adapted for connection as ring nodes in a high-speed ring. The upgrades obtained with TMuxes in both the linear and ring configurations provide per-span relief for fiber exhaust where no changes to the existing systems are desired. As well, the bandwidth of an existing system may be increased on a per-span basis or the equipment count may be reduced.
Abstract:
In one embodiment, an apparatus generally comprises one or more input interfaces for receiving a plurality of flows, a plurality of output interfaces, and a processor operable to identify large flows and select one of the output interfaces for each of the large flows to load-balance the large flows over the output interfaces. The apparatus further includes memory for storing a list of the large flows, a pinning mechanism for pinning the large flows to the selected interfaces, and a load-balance mechanism for selecting one of the output interfaces for each of the remaining flows. A method for local placement of large flows to assist in load-balancing is also disclosed.
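The local placement idea above can be sketched as follows, assuming a byte-count threshold for identifying large ("elephant") flows and least-loaded-interface pinning; the class name, threshold, and hash choice are illustrative assumptions, not the patented mechanism.

```python
import hashlib

class FlowBalancer:
    """Sketch: large flows are tracked in a list, pinned to the
    least-loaded output interface; remaining flows are hash-balanced."""

    def __init__(self, interfaces, large_flow_bytes=1_000_000):
        self.interfaces = list(interfaces)
        self.threshold = large_flow_bytes
        self.flow_bytes = {}                      # per-flow byte counters
        self.pinned = {}                          # list of large flows -> interface
        self.load = {i: 0 for i in self.interfaces}

    def select(self, flow_id, pkt_len):
        self.flow_bytes[flow_id] = self.flow_bytes.get(flow_id, 0) + pkt_len
        if flow_id in self.pinned:
            out = self.pinned[flow_id]            # pinning mechanism
        elif self.flow_bytes[flow_id] >= self.threshold:
            # Newly identified large flow: pin it to the least-loaded interface.
            out = min(self.interfaces, key=lambda i: self.load[i])
            self.pinned[flow_id] = out
        else:
            # Remaining flows: deterministic hash-based load balancing.
            h = int(hashlib.md5(flow_id.encode()).hexdigest(), 16)
            out = self.interfaces[h % len(self.interfaces)]
        self.load[out] += pkt_len
        return out
```

Pinning keeps a large flow on one interface (avoiding reordering) while the load-balance mechanism spreads the many small flows statistically.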
Abstract:
In one embodiment, an apparatus cascades groups of serialized data streams through devices, and performs operations based on information communicated therein. A received group of serialized data streams is aligned, but not framed, and forwarded to a next device (e.g., a next stage in a linear or tree cascaded formation of devices). Eliminating the framing and subsequent serialization operations performed on the received group of serialized data streams reduces the latency of communications through the cascaded devices, which can be significant when considered in relation to the high-speed communication rates. The received group of serialized data streams is also framed to create a sequence of data frames for processing (e.g., associative memory lookup operations, controlling multiplexing of received downstream serialized data streams, general or other processing) within the device.
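The align-but-don't-frame idea can be illustrated with a toy byte stream: the device aligns on a sync marker and forwards the aligned stream immediately, while framing happens separately for local processing only. The sync value and fixed frame length here are illustrative assumptions, not any real protocol's values.

```python
SYNC = 0x7E          # illustrative alignment marker
FRAME_LEN = 4        # illustrative fixed frame length

def align(stream):
    """Align the incoming serialized stream on the sync marker and return
    it unchanged from that point; this aligned (but unframed) stream is
    what gets forwarded to the next cascaded device, minimizing latency."""
    idx = stream.index(SYNC)
    return stream[idx:]

def frame(aligned):
    """Separately frame the aligned stream into fixed-length frames for
    local processing (e.g., lookups); forwarding does not wait for this."""
    return [aligned[i:i + FRAME_LEN]
            for i in range(0, len(aligned) - FRAME_LEN + 1, FRAME_LEN)]
```

The key point is that `align` alone is on the forwarding path; the latency of `frame` (and any subsequent processing) is paid only locally, not by every downstream device in the cascade.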
Abstract:
A scheduling method and system for a multi-level class hierarchy are disclosed. The hierarchy includes a root node linked to at least two groups. One of the groups has priority over the other of the groups and comprises at least one high priority queue and at least one low priority queue. The method includes receiving traffic at the root node, directing traffic received at the root node to one of the groups, and directing traffic received at the priority group to one of the high priority and low priority queues. Packets are accepted at the high priority queue or the low priority queue if a specified rate is not exceeded at the high and low priority queues and at least some packets are dropped at the low priority queue if the specified rate is exceeded at the high and low priority queues.
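The acceptance/drop behavior of the priority group can be sketched with a shared token bucket standing in for the "specified rate". The tie-break that high-priority packets are still accepted once the shared rate is exceeded is an illustrative assumption, as are the names and the one-token-per-packet model.

```python
class PriorityGroup:
    """Sketch: one high- and one low-priority queue share a single
    specified rate, modeled as a token bucket. When the shared rate is
    exceeded, packets are dropped at the low-priority queue."""

    def __init__(self, rate_tokens):
        self.tokens = rate_tokens
        self.high, self.low = [], []

    def enqueue(self, pkt, high_priority):
        if self.tokens > 0:
            # Rate not exceeded across both queues: accept at either queue.
            self.tokens -= 1
            (self.high if high_priority else self.low).append(pkt)
            return True
        if high_priority:            # assumption: high priority still accepted
            self.high.append(pkt)
            return True
        return False                 # rate exceeded: drop at low-priority queue
```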
Abstract:
Real-time customer packet traffic is instrumented to determine measured delays between two or more points along a path actually traveled by a packet, such as within or external to one or more packet switching devices. These measurements may include delays within a packet switching device other than the ingress and egress time of a packet. These measured delays can be used to determine whether or not the performance of a packet switching device or network meets desired levels, especially for complying with a Service Level Agreement.
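The measurement idea reduces to recording timestamps at points along the packet's actual path and differencing them, as in this sketch (the point names and SLA check are illustrative assumptions):

```python
def segment_delays(timestamps):
    """Given (point_name, time) pairs recorded as a packet passes
    measurement points along its path, return per-segment delays,
    including delays internal to a packet switching device."""
    return [(a[0], b[0], b[1] - a[1])
            for a, b in zip(timestamps, timestamps[1:])]

def meets_sla(timestamps, max_total_delay):
    """Check the measured end-to-end delay against a Service Level
    Agreement bound."""
    return (timestamps[-1][1] - timestamps[0][1]) <= max_total_delay
```

Because intermediate points (e.g., a queueing stage inside a device) are timestamped, the per-segment delays localize where an SLA violation occurs, not just that it occurred.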
Abstract:
In one embodiment, a hierarchical scheduling system including multiple scheduling layers with layer bypass is used to schedule items (e.g., corresponding to packets). This scheduling of items performed in one embodiment includes: propagating first items through the hierarchical scheduling system and updating scheduling information in each of the plurality of scheduling layers based on said propagated first items as said propagated first items propagate through the plurality of scheduling layers, and bypassing one or more scheduling layers of the plurality of scheduling layers for scheduling bypassing items and updating scheduling information in each of said bypassed one or more scheduling layers based on said bypassing items. In one embodiment, this method is performed by a particular machine. In one embodiment, the operations of propagating first items through the hierarchical scheduling system and bypassing one or more scheduling layers are done in parallel.
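The layer-bypass behavior can be sketched as follows: items normally propagate through every scheduling layer, while bypassing items skip the scheduling step of some layers yet still update those layers' accounting so their state stays consistent. Modeling layers as FIFO stages with byte counters is an illustrative simplification.

```python
class HierarchicalScheduler:
    """Sketch of a multi-layer scheduling hierarchy with layer bypass."""

    def __init__(self, num_layers):
        self.layers = [{"queued": [], "bytes_seen": 0}
                       for _ in range(num_layers)]

    def propagate(self, item, size):
        # Normal path: the item is scheduled at every layer, and each
        # layer's scheduling information is updated as it passes through.
        for layer in self.layers:
            layer["queued"].append(item)
            layer["bytes_seen"] += size

    def bypass(self, item, size, skip_layers):
        # Bypass path: skipped layers update scheduling information only,
        # without scheduling the item there.
        for idx, layer in enumerate(self.layers):
            layer["bytes_seen"] += size           # state always updated
            if idx not in skip_layers:
                layer["queued"].append(item)
```

Updating the bypassed layers' state is the essential step: without it, rate and occupancy accounting in those layers would drift from reality.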
Abstract:
In one embodiment, packet memory and resource memory of a memory are independently managed, with regions of packet memory being freed of packets and temporarily made available to resource memory. In one embodiment, packet memory regions are dynamically made available to resource memory so that in-service system upgrade (ISSU) of a packet switching device can be performed without having to statically allocate (as per prior systems) twice the memory space required by resource memory during normal packet processing operations. One embodiment dynamically collects fragments of packet memory stored in packet memory to form a contiguous region of memory that can be used by resource memory in a memory system that is shared between many clients in a routing complex. One embodiment assigns a contiguous region no longer used by packet memory to resource memory, and from resource memory to packet memory, dynamically without packet loss or pause.
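The fragment-collection step can be sketched as a simple coalescing of freed `(start, length)` regions; the function names and region model are illustrative assumptions, not the patented implementation.

```python
def coalesce(fragments):
    """Merge adjacent freed (start, length) fragments of packet memory
    into the largest possible contiguous regions."""
    fragments = sorted(fragments)
    merged = [list(fragments[0])]
    for start, length in fragments[1:]:
        if start == merged[-1][0] + merged[-1][1]:
            merged[-1][1] += length          # adjacent: extend the region
        else:
            merged.append([start, length])
    return [tuple(m) for m in merged]

def reassign(contiguous, resource_regions):
    """Dynamically hand a coalesced contiguous region from packet memory
    to resource memory (the reverse direction works symmetrically),
    avoiding a static double allocation during an ISSU."""
    resource_regions.append(contiguous)
    return resource_regions
```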