Abstract:
The invention encompasses several improved Turbo Codes Decoder methods and apparatus that provide a more suitable, practical and simpler way to implement a Turbo Codes Decoder in ASIC or DSP code. (1) Two parallel Turbo Codes Decoder blocks (40A & 40B) compute soft-decoded data RXDa and RXDb from two different receive paths. (2) Two pipelined Log-MAP decoders (A42 & B44) are used for iterative decoding of the received data. (3) A sliding window of block N data is applied to the input memory for pipelined operation. (4) The output block N data from the first decoder A are stored in RAM memory A, and the second decoder B stores its output data in RAM memory B while decoding block N data from RAM memory A in the same clock cycle. (5) Log-MAP decoders are simpler to implement and consume less power. (6) The pipelined Log-MAP decoder architecture provides high-speed data throughput: one output per clock cycle.
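The two-decoder, two-RAM pipeline of items (1)-(4) can be modeled in software. This is a minimal sketch, not the patent's circuit: `decoder_a` and `decoder_b` are hypothetical stand-ins for the two pipelined Log-MAP decoders, and sequential Python code models what the hardware does concurrently in one block time.

```python
def two_stage_pipeline(blocks, decoder_a, decoder_b):
    """Software model of the two-stage pipeline: per block time,
    decoder A writes its result into RAM A while decoder B consumes
    the previously written RAM-A block and appends its own output to
    RAM B.  decoder_a/decoder_b are hypothetical stage functions."""
    ram_a = None   # single-block buffer between the two decoders
    ram_b = []     # accumulated output of the second decoder
    for blk in blocks:
        prev = ram_a            # block decoder B works on this cycle
        ram_a = decoder_a(blk)  # decoder A refills RAM A concurrently
        if prev is not None:
            ram_b.append(decoder_b(prev))
    if ram_a is not None:
        ram_b.append(decoder_b(ram_a))  # drain the last block
    return ram_b
```

After the first block's latency, one decoded block emerges per block time, which is the pipelining claim of item (6) at block granularity.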
Abstract:
A method of decoding using a log posterior probability ratio L(u_k), which is a function of the forward variable α(·) and the backward variable β(·). The method comprises dividing the forward variable α(·) and the backward variable β(·) into, for example, two segments p and q, where p plus q equals the length of the code word U. The forward segments of α(·) are calculated in parallel, and the backward segments of β(·) are calculated in parallel. The ratio L(u_k) is calculated using the parallel-calculated segments of α(·) and β(·).
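The final combining step can be sketched as follows. This is an illustrative toy, not the patent's implementation: the dictionary layout of α, β and the branch metrics γ, and the use of the standard Jacobian-logarithm `max_star`, are my assumptions for a generic log-MAP combiner.

```python
import math

def max_star(a, b):
    # Jacobian logarithm: ln(e^a + e^b), the core log-MAP operation.
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def llr(alpha, beta, gamma):
    """Compute L(u_k) for one trellis step from forward variables
    alpha[s'], backward variables beta[s], and branch metrics
    gamma[(s_prev, s_next, u)].  States/transitions are a toy layout."""
    acc = {}
    for (s_prev, s_next, u), g in gamma.items():
        t = alpha[s_prev] + g + beta[s_next]
        acc[u] = t if u not in acc else max_star(acc[u], t)
    # ln P(u_k = 1 | y) - ln P(u_k = 0 | y)
    return acc[1] - acc[0]
```

Because each segment of α(·) and β(·) feeds this step independently, the segments p and q can be produced by parallel workers before the combine.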
Abstract:
Sliding-window decoders with processor-systems (1) for decoding streams of symbols run prolog deriving processes (23) for deriving initial parameters for prolog-windows, and main deriving processes (24) for deriving main parameters for main-windows, using initial conditions defined by said initial parameters. By introducing defining processes (22) that give the prolog-windows a flexible number of symbols, that is, a flexible size depending on the required quality of the initial condition, the prolog-window can be made larger or smaller (yielding an initial condition of higher or lower quality). As a result, efficiency is improved, because the average overlap between the prolog-windows of a given main-window and a neighboring main-window is reduced. Preferably, per main-window, the prolog-windows get increasing sizes. Based on the insight that initial conditions need flexible qualities, the basic idea introduces flexible sizes for prolog-windows. Sizes that grow per iteration make the sliding-window decoders even more advantageous.
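The grow-per-iteration schedule can be expressed as a one-line policy. This is a hypothetical linear schedule of my own; the patent only requires that prolog-window sizes be flexible and preferably increasing, not this particular formula or these parameter names.

```python
def prolog_sizes(base, growth, iterations):
    """Hypothetical prolog-window size schedule: early iterations use
    short prologs (cheap, lower-quality initial conditions), later
    iterations use longer ones (higher-quality initial conditions).
    base and growth are illustrative tuning parameters in symbols."""
    return [base + growth * it for it in range(iterations)]
```

A decoder would look up the prolog-window length for iteration `it` and start the backward (or forward) recursion that many symbols before the main-window boundary.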
Abstract:
A method (600, 800) for turbo decoding one or more data blocks. The method includes the steps of receiving (602, 802) one or more data blocks in a plurality of time slots at a communication unit. At least one Backward Processor computes (625) backward path metrics for a plurality of data slots and stores the backward path metrics in a storage element. A Forward Processor computes (645, 835) forward path metrics for the plurality of data slots. A data block determination function calculates and outputs (648, 838) decoded data for the data blocks based on the forward path metrics and the stored backward path metrics. By storing backward path metrics in a turbo decoding operation, the data block determination function, for example an a-posteriori probabilities module, calculates and outputs decoded data using reduced storage space compared to known techniques of storing forward path metrics. There is an advantage in delay over known "sliding window" decoding techniques.
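The storage pattern described above, backward metrics stored, forward metrics consumed on the fly, can be sketched on a toy 2-state trellis. Everything here is an illustrative assumption (the transition labeling, the `max_star` combiner, the uniform initializations), not the patented circuit; the point is that only β needs a full-block buffer while a single α vector suffices.

```python
import math

def max_star(a, b):
    # Jacobian logarithm ln(e^a + e^b) for log-domain probability sums.
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def decode(gammas):
    """gammas[k] maps (s_prev, s_next, u) -> branch metric at step k on
    a toy 2-state trellis.  Backward pass first, STORING every beta
    vector; forward pass then keeps only one alpha vector and emits an
    LLR per step, so no full-block alpha storage is needed."""
    N = len(gammas)
    betas = [[0.0, 0.0] for _ in range(N + 1)]   # stored per step
    for k in range(N - 1, -1, -1):
        for s in (0, 1):
            terms = [g + betas[k + 1][s_next]
                     for (s_prev, s_next, u), g in gammas[k].items()
                     if s_prev == s]
            b = terms[0]
            for t in terms[1:]:
                b = max_star(b, t)
            betas[k][s] = b
    alpha = [0.0, 0.0]   # only the current alpha vector is kept
    llrs = []
    for k in range(N):
        acc = {}
        nxt = [None, None]
        for (s_prev, s_next, u), g in gammas[k].items():
            t = alpha[s_prev] + g + betas[k + 1][s_next]
            acc[u] = t if u not in acc else max_star(acc[u], t)
            a = alpha[s_prev] + g
            nxt[s_next] = a if nxt[s_next] is None else max_star(nxt[s_next], a)
        llrs.append(acc[1] - acc[0])
        alpha = nxt
    return llrs
```

The β buffer costs O(N · states) while α costs O(states), which is the storage asymmetry the abstract exploits.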
Abstract:
A method and apparatus for reducing memory requirements and increasing decoding speed of turbo-encoded data in a MAP decoder. Turbo-coded data is decoded by computing alpha values and saving checkpoint alpha values on a stack. The checkpoint values are then used to recreate the alpha values needed in later computations. By saving only a subset of the alpha values, the memory required to hold them is conserved. Alpha and beta computations are made using a min* operation, which provides a mathematical equivalent for adding probabilities expressed as logarithmic values without having to convert out of the logarithmic domain. To increase the speed of the min* operation, logarithmic values are computed assuming that one min* input is larger than the other and vice versa at the same time; the correct value is selected later based on a partial result of the comparison of the values supplied to the min* calculation. Additionally, calculations are begun without waiting for previous calculations to finish. The computational values are kept to a minimal accuracy to minimize propagation delay. An offset is added to the logarithmic calculations to keep them from becoming negative and requiring an additional bit to represent a sign. Circuits that correct for errors in partial results are employed. Normalization circuits, which zero the most significant bits of alpha and beta based on a previous decoder iteration, are employed so as to add only minimal time to circuit critical paths.
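The min* operation referred to above has a standard closed form, shown here as a Python sketch (function name is mine; the patent implements this in hardware):

```python
import math

def min_star(a, b):
    """min*(a, b) = -ln(e^-a + e^-b): the exact log-domain equivalent
    of adding two probabilities stored as negative logarithms.  It is
    the smaller input minus a correction that depends only on |a - b|,
    which is what lets hardware evaluate both orderings speculatively
    and pick the right one once the comparison resolves."""
    return min(a, b) - math.log1p(math.exp(-abs(a - b)))
```

The speculative trick in the abstract computes `a - correction` and `b - correction` in parallel and uses the sign of `a - b` as the late select signal.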
Abstract:
A method for parallel concatenated (Turbo) encoding and decoding. Turbo encoders receive a sequence of input data tuples and encode them. The input sequence may correspond to a sequence from an original data source, or to an already coded data sequence such as that provided by a Reed-Solomon encoder. A turbo encoder generally comprises two or more encoders separated by one or more interleavers. The input data tuples may be interleaved using a modulo scheme in which the interleaving follows some method (such as block or random interleaving) with the added stipulation that input tuples may be interleaved only to interleaved positions having the same modulo-N index (where N is an integer) as they have in the input data sequence. If all the input tuples are encoded by all encoders, then output tuples can be chosen sequentially from the encoders and no tuples will be missed. If the input tuples comprise multiple bits, the bits may be interleaved independently to interleaved positions having the same modulo-N index and the same bit position. This may improve the robustness of the code. A first encoder may have no interleaver, or all encoders may have interleavers, whether the input tuple bits are interleaved independently or not. Modulo-type interleaving also allows decoding in parallel.
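A modulo-constrained permutation of the kind described can be built by permuting positions only within each residue class. This sketch uses a seeded random permutation per class as an illustrative choice; the abstract allows any underlying method (block, random, etc.) as long as the modulo-N constraint holds.

```python
import random

def modulo_interleaver(length, n, seed=0):
    """Build a permutation pi of range(length) such that
    pi[i] % n == i % n for every i (the modulo-N stipulation)."""
    rng = random.Random(seed)
    # Group positions by residue class modulo n.
    classes = [[i for i in range(length) if i % n == r] for r in range(n)]
    pi = [0] * length
    for cls in classes:
        shuffled = cls[:]
        rng.shuffle(shuffled)   # permute within the residue class only
        for src, dst in zip(cls, shuffled):
            pi[src] = dst
    return pi
```

Because position `i` and its image always share the same residue mod N, N decoder workers can each process one residue class with no interleaver-induced collisions, which is the parallel-decoding property claimed at the end of the abstract.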
Abstract:
The invention relates to a turbo decoder for decoding a data signal (D) transmitted over a noisy channel, which comprises a symbol estimator (MAP_DEC). Said estimator comprises computing means which, using knowledge of the error-protection code employed on the transmitter side, calculate transition metric values and forward and backward recursion metric values, and which calculate the output values (LLR) therefrom. The computing means comprise at least one hardware calculating module (RB1/2/3), constituted of at least one block of combinatorial logic, for generating at least one type of said values.
Abstract:
A system and method for decoding a channel bit stream efficiently performs trellis-based operations. The system includes a butterfly coprocessor and a digital signal processor. For trellis-based encoders, the system decodes a channel bit stream by performing operations in parallel in the butterfly coprocessor, at the direction of the digital signal processor. The operations are used in implementing the MAP algorithm, the Viterbi algorithm, and other soft- or hard-output decoding algorithms. The DSP may perform memory management and algorithmic scheduling on behalf of the butterfly coprocessor. The butterfly coprocessor may perform parallel butterfly operations for increased throughput. The system maintains flexibility, for use in a number of possible decoding environments.
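A single trellis butterfly, the unit such a coprocessor would replicate, can be sketched as a radix-2 add-compare-select. The antipodal +bm/-bm branch labeling and max-metric convention are my illustrative assumptions; real codes assign branch metrics from the encoder polynomials.

```python
def butterfly(pm, bm):
    """One radix-2 add-compare-select butterfly: two predecessor path
    metrics pm = (pm0, pm1) and one branch metric bm yield two
    successor metrics plus survivor-select bits.  Antipodal branch
    labels are a toy assumption, not a specific code's trellis."""
    cand_even = (pm[0] + bm, pm[1] - bm)   # candidates into even state
    cand_odd = (pm[0] - bm, pm[1] + bm)    # candidates into odd state
    even = max(cand_even)
    odd = max(cand_odd)
    # Return (surviving metric, index of surviving predecessor) pairs.
    return (even, cand_even.index(even)), (odd, cand_odd.index(odd))
```

A butterfly coprocessor would evaluate many such independent butterflies per cycle, since each touches only its own pair of states.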