Abstract:
In one embodiment, an apparatus generally comprises one or more input interfaces for receiving a plurality of flows, a plurality of output interfaces, and a processor operable to identify large flows and select one of the output interfaces for each of the large flows to load-balance the large flows over the output interfaces. The apparatus further includes memory for storing a list of the large flows, a pinning mechanism for pinning the large flows to the selected interfaces, and a load-balance mechanism for selecting one of the output interfaces for each of the remaining flows. A method for local placement of large flows to assist in load-balancing is also disclosed.
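The placement logic described above can be sketched in software. This is a minimal illustrative sketch, not the patented implementation: the class name, the byte-count threshold used to identify large flows, the least-loaded pinning policy, and the hash function choice are all assumptions for illustration.

```python
import hashlib

class FlowBalancer:
    """Illustrative sketch: pin identified large flows to selected output
    interfaces (kept in a pinned-flow list), and hash-balance the
    remaining flows over the interfaces. Threshold and policy are
    assumptions, not taken from the abstract."""

    def __init__(self, interfaces, large_flow_threshold_bytes=10_000_000):
        self.interfaces = interfaces
        self.threshold = large_flow_threshold_bytes
        self.byte_counts = {}   # flow_id -> bytes observed so far
        self.pinned = {}        # flow_id -> interface (the large-flow list)

    def _least_loaded(self, load):
        # Hypothetical selection policy: pick the interface carrying
        # the fewest bytes according to the supplied load map.
        return min(self.interfaces, key=lambda i: load.get(i, 0))

    def select_interface(self, flow_id, pkt_len, load):
        self.byte_counts[flow_id] = self.byte_counts.get(flow_id, 0) + pkt_len
        if flow_id in self.pinned:
            return self.pinned[flow_id]            # already-pinned large flow
        if self.byte_counts[flow_id] >= self.threshold:
            # Newly identified large flow: pin it to a selected interface.
            self.pinned[flow_id] = self._least_loaded(load)
            return self.pinned[flow_id]
        # Remaining flows: stateless hash-based load balancing.
        h = int(hashlib.md5(flow_id.encode()).hexdigest(), 16)
        return self.interfaces[h % len(self.interfaces)]
```

Once a flow is pinned, it keeps its interface even if the load picture changes, which is the property that keeps large flows from being re-hashed onto an already busy link.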
Abstract:
In one embodiment, an apparatus cascades groups of serialized data streams through devices, and performs operations based on information communicated therein. A received group of serialized data streams is aligned, but not framed, and forwarded to a next device (e.g., a next stage in a linear or tree cascaded formation of devices). Eliminating the framing and subsequent serialization operations performed on the received group of serialized data streams reduces the latency of communications through the cascaded devices, which can be significant when considered in relation to the high-speed communication rates. The received group of serialized data streams is also framed to create a sequence of data frames for processing (e.g., associative memory lookup operations, controlling multiplexing of received downstream serialized data streams, general or other processing) within the device.
Abstract:
A scheduling method and system for a multi-level class hierarchy are disclosed. The hierarchy includes a root node linked to at least two groups. One of the groups has priority over the other of the groups and comprises at least one high priority queue and at least one low priority queue. The method includes receiving traffic at the root node, directing traffic received at the root node to one of the groups, and directing traffic received at the priority group to one of the high priority and low priority queues. Packets are accepted at the high priority queue or the low priority queue if a specified rate is not exceeded at the high and low priority queues and at least some packets are dropped at the low priority queue if the specified rate is exceeded at the high and low priority queues.
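The admission rule for the priority group can be sketched as follows. This is a hedged sketch only: the abstract does not say how the specified rate is measured, so a token bucket is assumed here, and all names and parameters are illustrative.

```python
import time

class PriorityGroup:
    """Illustrative sketch of the described admission rule: a high- and a
    low-priority queue share one specified rate (modeled as a token
    bucket, an assumption). While the rate is not exceeded, packets are
    accepted at either queue; once it is exceeded, low-priority packets
    are dropped while high-priority packets are still queued."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()
        self.high, self.low = [], []

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def enqueue(self, pkt_len, payload, high_priority):
        self._refill()
        within_rate = self.tokens >= pkt_len
        if within_rate:
            self.tokens -= pkt_len        # both queues charge the shared rate
        if high_priority:
            self.high.append(payload)     # high priority queued regardless
            return True
        if within_rate:
            self.low.append(payload)      # low priority accepted while in-rate
            return True
        return False                      # low priority dropped when over-rate
```

The key point the sketch shows is that both queues draw on the same rate budget, but only the low-priority queue converts rate exhaustion into drops.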
Abstract:
Real-time customer packet traffic is instrumented to determine measured delays between two or more points along a path actually traveled by a packet, such as within or external to one or more packet switching devices. These measurements may include delays within a packet switching device other than the ingress and egress time of a packet. These measured delays can be used to determine whether or not the performance of a packet switching device or network meets desired levels, especially for complying with a Service Level Agreement.
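The measurement idea can be sketched as timestamping a packet at each point it passes and deriving per-segment delays. This is an illustrative sketch, not the patented mechanism: the point names, the metadata layout, and the single-bound SLA check are assumptions.

```python
import time

def record_point(pkt, point_name):
    """Stamp the packet's metadata with the time it passed a measurement
    point (e.g. ingress, after internal queueing, egress).
    Point names here are hypothetical."""
    pkt.setdefault("timestamps", []).append((point_name, time.monotonic()))

def segment_delays(pkt):
    """Measured delays between consecutive points along the path the
    packet actually traveled."""
    ts = pkt["timestamps"]
    return {f"{a}->{b}": t2 - t1 for (a, t1), (b, t2) in zip(ts, ts[1:])}

def meets_sla(pkt, max_delay_s):
    """True if every measured segment is within the (assumed) SLA bound."""
    return all(d <= max_delay_s for d in segment_delays(pkt).values())
```

Because points can be placed inside a device, not only at its edges, the same machinery yields internal delays (for example, time spent in a queueing stage) rather than just ingress-to-egress figures.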
Abstract:
A configuration for a transport node of a telecommunication system comprises a pair of transparent mux/demuxes provided at two sites and connected over a high rate span. The T-Muxes provide continuity of all tribs and maintain a lower bit rate linear or ring system through the higher bit rate span. The lower bit rate linear or ring system operates as if it were directly connected without the higher bit rate midsection. For the forward direction of the traffic, the T-Mux comprises a multi-channel receiver for receiving the trib signals and providing for each trib signal a trib data signal and a trib OAM&P signal. The data signals are multiplexed into a supercarrier data signal and the OAM&P signals are processed to generate a supercarrier OAM&P signal. A supercarrier transmitter maps the supercarrier data signal and the supercarrier OAM&P signal into a supercarrier signal and transmits same over the high rate span. Reverse operations are effected for the reverse direction of traffic. With this invention, an entire ring system does not have to be upgraded to a higher line rate due to fiber exhaust on a single span. The invention is particularly applicable to SONET OC-48/OC-12/OC-3 linear and ring networks and the high rate span could be an OC-192 line.
Abstract:
A non-transitory computer readable medium includes a token space of a first quantity of memory locations representing a maximum number of tokens of a user, the token space being stored on a compute/storage node. Also included is an ID space of a second quantity of memory locations representing a number of active tokens of the user, where the second quantity is less than the first quantity and the ID space is also stored on the compute/storage node. Each token is a digital currency with a predetermined fiat currency value. Each token has a token ID formed from a personal ID of the user and a value associated with the token's memory location in the ID space, so that the token ID is unique.
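The token-ID construction can be sketched as combining the two components into one value. This is a hedged illustration only: the abstract does not specify the encoding, so a bit-field concatenation is assumed, and the field width is a made-up parameter.

```python
def make_token_id(personal_id: int, slot_index: int, id_space_bits: int = 16) -> int:
    """Illustrative sketch: form a token ID from the user's personal ID
    and the value associated with the token's memory location (slot) in
    the ID space. The bit-concatenation scheme and 16-bit width are
    assumptions; the point is only that distinct (user, slot) pairs
    yield distinct token IDs."""
    assert 0 <= slot_index < (1 << id_space_bits), "slot must fit the ID space"
    return (personal_id << id_space_bits) | slot_index
```

Under this encoding the token ID is unique because the personal ID occupies the high bits and the slot value the low bits, so no two distinct pairs collide.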
Abstract:
Disclosed are, inter alia, methods, apparatus, computer-readable media, mechanisms, and means for instrumenting real-time customer packet traffic to measure delays between points along a path actually traveled by a packet. These measured delays can be used to determine whether or not the performance of a packet switching device and/or network meets desired levels, especially for complying with a Service Level Agreement.
Abstract:
A configuration for a SONET transport node comprises a pair of transparent mux/demuxes provided at two sites and connected over a high rate span. The T-Muxes provide continuity of all tribs and maintain a lower bit rate linear or ring system through the higher bit rate span. The lower bit rate linear or ring system operates as if it were directly connected without the higher bit rate midsection. For the forward direction of the traffic, the T-Mux comprises a multi-channel receiver for receiving the trib signals and providing for each trib signal a trib SPE and a trib OH. The trib SPEs are multiplexed into a supercarrier SPE and the trib OHs are processed to generate a supercarrier OH. A supercarrier transmitter maps the supercarrier SPE and the supercarrier OH into a supercarrier signal and transmits same over the high rate span. Reverse operations are effected for the reverse direction of traffic. With this invention, an entire ring system does not have to be upgraded to a higher line rate due to fiber exhaust on a single span. The invention is particularly applicable to OC-48/OC-12/OC-3 linear and ring networks and the high rate span could be an OC-192 line.
Abstract:
In one embodiment, a hierarchical scheduling system including multiple scheduling layers with layer bypass is used to schedule items (e.g., corresponding to packets). This scheduling of items performed in one embodiment includes: propagating first items through the hierarchical scheduling system and updating scheduling information in each of the plurality of scheduling layers based on said propagated first items as said propagated first items propagate through the plurality of scheduling layers, and bypassing one or more scheduling layers of the plurality of scheduling layers for scheduling bypassing items and updating scheduling information in each of said bypassed one or more scheduling layers based on said bypassing items. In one embodiment, this method is performed by a particular machine. In one embodiment, the operations of propagating first items through the hierarchical scheduling system and bypassing one or more scheduling layers are done in parallel.
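The bypass behavior can be sketched as follows. This is an illustrative sketch under stated assumptions, not the patented design: the layer representation, the per-layer counter standing in for "scheduling information", and the sequential (rather than parallel) traversal are all simplifications.

```python
class Layer:
    def __init__(self, name):
        self.name = name
        self.visited_by = []    # items that actually traversed this layer
        self.sched_state = 0    # scheduling info, updated even when bypassed

class HierarchicalScheduler:
    """Illustrative sketch: items normally propagate layer by layer;
    designated items bypass one or more layers but still cause the
    scheduling information of each bypassed layer to be updated, as the
    abstract requires."""

    def __init__(self, num_layers):
        self.layers = [Layer(f"L{i}") for i in range(num_layers)]
        self.output = []

    def propagate(self, item):
        for layer in self.layers:
            layer.visited_by.append(item)   # item traverses the layer
            layer.sched_state += 1          # ...and updates its state
        self.output.append(item)

    def bypass(self, item, bypassed_names):
        for layer in self.layers:
            layer.sched_state += 1          # bypassed layers still updated
            if layer.name not in bypassed_names:
                layer.visited_by.append(item)
        self.output.append(item)
```

The invariant the sketch preserves is that every layer's scheduling state reflects every item, whether or not the item physically traversed that layer, so bypassing saves traversal latency without letting the bypassed layers' state go stale.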