Abstract:
Tasks are dynamically allocated to process packets. In particular, packets of data to be processed are assigned a packet identification. The packet identification includes a lane and a packet sequence number. The term “lane” as used herein refers to a port number and a direction (i.e., ingress or egress), such as Port 3 Egress. A set of resources (e.g., registers and memory buffers) is associated with each lane. A task processing a packet is allowed to access the resources associated with the packet's lane. In some embodiments, a task may change the port that it services and use the resources associated with that port.
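As a rough illustration only (the class and function names below are hypothetical, not drawn from the abstract), the lane-keyed packet identification and per-lane resource lookup might be modeled as:

```python
# Minimal sketch: packets are tagged with a lane (port number + direction) and a
# sequence number, and a task is granted access only to the resource set bound
# to that lane. All names here are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class Direction(Enum):
    INGRESS = "ingress"
    EGRESS = "egress"

@dataclass(frozen=True)
class Lane:
    port: int
    direction: Direction        # e.g. Lane(3, Direction.EGRESS) -> "Port 3 Egress"

@dataclass
class LaneResources:
    registers: dict = field(default_factory=dict)
    buffers: list = field(default_factory=list)

class Scheduler:
    def __init__(self):
        self._resources: dict[Lane, LaneResources] = {}
        self._next_seq: dict[Lane, int] = {}

    def assign_packet_id(self, lane: Lane) -> tuple[Lane, int]:
        """Return a packet identification: (lane, packet sequence number)."""
        seq = self._next_seq.get(lane, 0)
        self._next_seq[lane] = seq + 1
        return (lane, seq)

    def resources_for(self, lane: Lane) -> LaneResources:
        """A task servicing `lane` may only touch this resource set."""
        return self._resources.setdefault(lane, LaneResources())
```

Keying both the sequence counter and the resource set by lane keeps a task's accesses confined to the lane it currently services.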
Abstract:
A system and method are provided for tolerating data line faults in a packet communications network. The method comprises: serially transmitting information packets from at least one traffic manager (TM); at a switch fabric, accepting information packets at a plurality of ingress ports, the information packets addressing destination port card egress ports; selectively connecting port card ingress ports to port card egress ports; serially supplying information packets from a plurality of port card egress ports; sensing a connection fault between the switch fabric and the TM; and, in response to sensing the fault, reselecting connections between the switch fabric port card ports and the TM. In some aspects, sensing the fault comprises an ingress memory subsystem (iMS) detecting that errors in cells received on an ingress port exceed an error threshold. Reselecting connections between the port card ports and the TM then includes the iMS sending a message to the ingress TM (iTM) identifying the faulty ingress connection.
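A minimal sketch, assuming an error counter per ingress link and invented class and threshold names, of how the iMS might detect the fault and ask the iTM to reselect a connection:

```python
# Hypothetical sketch: an error counter per ingress link; once the threshold is
# exceeded the iMS notifies the ingress traffic manager (iTM), which reselects
# a spare connection. Names and the threshold value are illustrative assumptions.
ERROR_THRESHOLD = 8   # assumed value

class IngressMemorySubsystem:
    def __init__(self, itm, threshold=ERROR_THRESHOLD):
        self.itm = itm
        self.threshold = threshold
        self.errors = {}          # ingress link id -> error count

    def on_cell(self, link, cell_ok):
        if cell_ok:
            return
        self.errors[link] = self.errors.get(link, 0) + 1
        if self.errors[link] >= self.threshold:
            # Report the faulty ingress connection back to the traffic manager.
            self.itm.reselect_connection(faulty_link=link)
            self.errors[link] = 0

class IngressTrafficManager:
    def __init__(self, spare_links):
        self.spare_links = list(spare_links)
        self.active = {}

    def reselect_connection(self, faulty_link):
        """Move traffic from the faulty link to a spare port-card connection."""
        if self.spare_links:
            self.active[faulty_link] = self.spare_links.pop(0)
```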
Abstract:
A system and modulation method are provided for reducing jitter in the mapping of information into Synchronous Payload Envelopes (SPEs) in a data tributary mapping system. The method comprises buffering data from a plurality of tributaries, and generating buffer-fill information responsive to the buffered data being written and read. The buffer-fill information is filtered, producing rate control information. The rate control information is modulated, and the modulated rate control information is used in controlling the mapping of buffered tributaries into an SPE. The rate control information can be modulated with periodic signals, such as a sine or square wave, or with pseudorandom signals having an average value of about zero.
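One way to picture the filtering-plus-modulation step, as an illustrative sketch with assumed filter and dither constants rather than the patented design:

```python
# Illustrative only: buffer-fill samples are low-pass filtered into a rate
# control value, then a small zero-mean dither (sine or pseudorandom) is added
# before the value drives the SPE mapping decision. Constants are assumptions.
import math
import random

class RateController:
    def __init__(self, alpha=0.05, dither_amp=0.5):
        self.alpha = alpha            # first-order low-pass filter coefficient
        self.dither_amp = dither_amp  # amplitude of the zero-mean modulation
        self.filtered = 0.0
        self.t = 0

    def update(self, buffer_fill, use_sine=True):
        # Filter the raw buffer-fill reading.
        self.filtered += self.alpha * (buffer_fill - self.filtered)
        # Modulate with a zero-mean signal to spread the mapping decisions.
        if use_sine:
            dither = self.dither_amp * math.sin(2 * math.pi * self.t / 64)
        else:
            dither = self.dither_amp * (2 * random.random() - 1.0)
        self.t += 1
        return self.filtered + dither   # rate control value for the mapper
```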
Abstract:
An architecture for a high bandwidth digital cross-connect switching system that is internally non-blocking, has a simpler layout, and employs a reduced number of logic gates. The high bandwidth digital cross-connect switching architecture comprises a Time Division Multiplexing (TDM) cross-connect including M space/time switches. Each space/time switch includes an input bus, an output bus, N×W Flip-Flops (FFs) for storing input data, W N-by-N switches for sorting the data according to predetermined cross-connection requirements, and N×W FFs for storing output data, in which “N” corresponds to the number of input ports and the number of output ports in the N-by-N switch, and “W” corresponds to the width of each data word. Each N-by-N switch includes N×W N-to-1 selectors, and the M space/time switches include N×W M-to-1 selectors, thereby allowing an effective N×M-to-1 selection to be performed on the data words.
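A behavioral (not gate-level) sketch of the selection stage; the connection-map data structure and example values are assumptions made for illustration:

```python
# Behavioral sketch: each output word is produced by an effective (N*M)-to-1
# selection over the stored input words of the M space/time switches.

def cross_connect(input_words, connection_map):
    """input_words[m][n] -> W-bit word from port n of space/time switch m.
    connection_map[out] = (m, n) chooses which stored word feeds output `out`.
    """
    return [input_words[m][n] for (m, n) in connection_map]

# Example: M=2 switches, N=4 ports each; output 0 takes switch 1, port 2.
words = [[0x11, 0x12, 0x13, 0x14],
         [0x21, 0x22, 0x23, 0x24]]
print(cross_connect(words, [(1, 2), (0, 0), (0, 3), (1, 1)]))
```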
Abstract:
A disk array controller reliably detects disk drive power-on-reset events that may cause a disk drive that has uncommitted write data stored in its cache to lose such data. The methods for detecting the power-on-reset events include operating the disk drives in an ATA security mode in which a power-on-reset of a disk drive will cause the drive to enter a locked state in which data transfer commands are aborted; and tracking power cycle count attributes of the disk drives over time. When a disk drive power-on-reset event is detected, the disk array may be efficiently restored to an operational state by re-executing or “replaying” a set of write commands that are cached within the disk array controller. The invention is also applicable to single-disk-drive storage systems.
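A hypothetical controller-side sketch of the two detection methods and the write replay; the drive-access calls are assumed stubs, not a real ATA library API:

```python
# Hypothetical logic: detect a drive power-on-reset either from the ATA
# security lock state or from a change in the power-cycle-count attribute,
# then replay the write commands cached in the controller. The drive methods
# used below (is_security_locked, security_unlock, power_cycle_count, write)
# are assumed stubs for illustration.

class DriveMonitor:
    def __init__(self, drive):
        self.drive = drive
        self.last_power_cycles = drive.power_cycle_count()
        self.pending_writes = []            # write commands cached in the controller

    def cache_write(self, cmd):
        self.pending_writes.append(cmd)

    def check_and_recover(self):
        reset_detected = False
        # In ATA security mode a power-on-reset leaves the drive locked and
        # data transfer commands abort until it is unlocked.
        if self.drive.is_security_locked():
            reset_detected = True
            self.drive.security_unlock()
        # A jump in the power cycle count also indicates a reset occurred.
        cycles = self.drive.power_cycle_count()
        if cycles != self.last_power_cycles:
            reset_detected = True
            self.last_power_cycles = cycles
        if reset_detected:
            for cmd in self.pending_writes:   # replay uncommitted writes
                self.drive.write(cmd)
```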
Abstract:
Whenever a resource being modeled is accessed, an indication of the access is stored in a memory location for each application that is interested in monitoring the resource. These memory locations (also called “monitoring memory locations”) are identified individually for each application when a location in main memory is allocated. At that time, a pointer to the monitoring memory location is supplied to the application and also added to a group of pointers to locations that are updated when the resource is accessed. In addition, in certain embodiments, a bit is allocated within a bitmap for each monitoring memory location of any given application. Such a bit is set when the corresponding monitoring memory location is updated and cleared when the application reads the monitoring memory location. Checking the bitmap as a whole can therefore inform an application whether any of its monitoring memory locations has changed, and the application may use individual bits of the bitmap to identify (and cycle through) only those monitoring memory locations that have changed. Updates of monitoring memory locations may be implemented by overloading the operators that access each resource.
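A rough model of the registration, bitmap, and read-clears-bit behavior, with invented class names (the operator-overloading mechanism is only hinted at in a comment):

```python
# Rough model: each monitoring application registers a slot; on every resource
# access the registered slots are updated and the corresponding bitmap bits are
# set; reading a slot clears its bit. Names are assumptions for illustration.

class MonitoredResource:
    def __init__(self):
        self._watchers = []                    # (app, slot index) pairs

    def register(self, app):
        slot = app.allocate_slot()
        self._watchers.append((app, slot))
        return slot

    def access(self, value):
        # In the described scheme this update would be triggered through
        # overloaded operators that access the resource.
        for app, slot in self._watchers:
            app.update_slot(slot, value)

class MonitorApp:
    def __init__(self):
        self.slots = []
        self.bitmap = 0

    def allocate_slot(self):
        self.slots.append(None)
        return len(self.slots) - 1

    def update_slot(self, i, value):
        self.slots[i] = value
        self.bitmap |= (1 << i)                # mark slot as changed

    def changed(self):
        return self.bitmap != 0                # one check covers all slots

    def read_slot(self, i):
        self.bitmap &= ~(1 << i)               # clear bit on read
        return self.slots[i]
```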
Abstract:
A digital delay-locked loop (DLL) circuit for receiving parallel data and clock signals, deserializing the high-speed parallel data to low-speed data, and improving setup and hold times. A DLL circuit for an N-bit datapath includes a clock DLL configured to provide a clock signal pulse within an eye opening of each of N data signals. The DLL circuit further includes N data DLLs, each configured to adjust the delay of a data signal so as to substantially center the eye opening of the data signal on the clock signal pulse.
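As a toy behavioral model only (the delay codes and the sampling callback are assumptions, not the circuit), per-lane eye centering could look like:

```python
# Toy behavioral model: for each of the N data lanes, sweep the programmable
# delay, find the window where sampling is stable (the eye opening), and set
# the lane delay so the clock pulse lands at the window's center.

def center_eye(sample_ok, max_delay):
    """sample_ok(delay) -> True if the data samples cleanly at that delay code.
    Returns the delay setting at the middle of the passing window."""
    passing = [d for d in range(max_delay + 1) if sample_ok(d)]
    if not passing:
        raise RuntimeError("no eye opening found")
    return (passing[0] + passing[-1]) // 2

def calibrate(lanes, max_delay=31):
    # One data DLL per lane; each lane gets its own centered delay code.
    return [center_eye(lane_sampler, max_delay) for lane_sampler in lanes]
```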
Abstract:
A system and method for half-rate phase detecting are provided. The method comprises: receiving binary data; dividing the data by two; latching the divided data with a first half-rate clock, creating Q1; latching the divided data with a second half-rate clock, the inverse of the first clock, creating Q2; latching Q1 with the second clock, creating Q3; latching Q2 with the first clock, creating Q4; XORing Q1 and Q2 to create phase signals; and, XORing Q3 and Q4 to create reference signals, corresponding to the phase signals. In some aspects of the method, dividing the stream of data by two introduces a processing delay into the divided data. Then, the method further comprises: in response to the phase and reference signals, phase-locking a voltage controlled oscillator to generate the first and second clocks; delaying the received stream of binary data; and, using the first and second clocks to sample the delayed binary data.
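A logic-level sketch of the latch/XOR structure; the edge-driven harness around it is an assumption, while the Q1-Q4 naming follows the abstract:

```python
# Logic-level sketch of the described half-rate phase detector. The divided
# data is latched on both half-rate clock phases; Q1^Q2 gives the phase
# signal and Q3^Q4 gives the corresponding reference signal.

class HalfRatePhaseDetector:
    def __init__(self):
        self.q1 = self.q2 = self.q3 = self.q4 = 0

    def clk1_edge(self, divided_data):
        self.q4 = self.q2                 # latch Q2 with the first clock -> Q4
        self.q1 = divided_data            # latch divided data with clock 1 -> Q1

    def clk2_edge(self, divided_data):
        self.q3 = self.q1                 # latch Q1 with the second clock -> Q3
        self.q2 = divided_data            # latch divided data with clock 2 -> Q2

    def outputs(self):
        phase = self.q1 ^ self.q2         # phase signal
        reference = self.q3 ^ self.q4     # reference signal
        return phase, reference
```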
Abstract:
A method of synchronizing or initiating channel lock in a serial loop formed by an initializing transceiver and subject transceivers is disclosed. Should a transceiver in the serial loop detect that its receiving serial channel is desynchronized, it sends an unlock signal to the next transceiver in the loop. The unlock signal guarantees that the next transceiver's receiving serial channel will be desynchronized. Only the initializing transceiver may initiate a channel lock sequence.
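A protocol sketch paraphrasing the abstract's behavior, with invented method names:

```python
# Protocol sketch: losing sync propagates an unlock signal around the loop,
# and only the initializing transceiver restarts the channel lock sequence.
# Method names are illustrative assumptions.

class Transceiver:
    def __init__(self, is_initializer=False):
        self.is_initializer = is_initializer
        self.next = None                  # next transceiver in the serial loop
        self.locked = False

    def on_rx_desync(self):
        # Force the downstream receiver out of lock so the condition circulates.
        self.locked = False
        self.next.receive_unlock()

    def receive_unlock(self):
        self.locked = False
        if self.is_initializer:
            self.start_lock_sequence()    # only the initializer may re-lock the loop
        else:
            self.next.receive_unlock()

    def start_lock_sequence(self):
        self.locked = True
        # ...send lock/training pattern to self.next...
```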
Abstract:
A system and method are provided for transporting backward information in a digital wrapper format network of connected simplex devices. The system comprises a first simplex processor receiving downstream messages with overhead bytes. The first simplex processor selectively replaces overhead bytes with calculated overhead bytes and supplies the calculated overhead bytes. The system further comprises a buffer receiving the calculated overhead bytes from the first simplex processor and supplying the calculated overhead bytes. A second simplex processor accepts the calculated overhead bytes from the buffer and supplies an upstream message including the calculated overhead bytes. The first simplex processor receives messages in a frame format with an overhead section, drops the overhead section, and selectively reads backward message monitor bytes in the dropped overhead section to determine if upstream communication nodes are receiving transmitted messages.
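An illustrative data-flow sketch only; the frame layout, field names, and overhead calculation are placeholders, not the digital wrapper format itself:

```python
# Illustrative data flow: the first simplex processor drops the downstream
# overhead, checks the backward message monitor bytes, and computes replacement
# overhead bytes; the second simplex processor pulls them from a shared buffer
# and inserts them into the upstream frame. Field names are assumptions.
from collections import deque

buffer = deque()                          # holds calculated overhead bytes

def compute_overhead(payload):
    return {"parity": sum(payload) & 0xFF}   # placeholder calculation

def downstream_rx(frame):
    overhead, payload = frame["overhead"], frame["payload"]
    # Read backward message monitor bytes to determine whether upstream nodes
    # are receiving the transmitted messages.
    upstream_ok = overhead.get("backward_monitor") == 0
    buffer.append(compute_overhead(payload))   # hand off to the upstream direction
    return payload, upstream_ok

def upstream_tx(payload):
    overhead = buffer.popleft() if buffer else {}
    return {"overhead": overhead, "payload": payload}
```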