Abstract:
A channel-adaptive iterative turbo decoder for computing, with MAP decoders, a set of branch metrics for a window of received data, computing the forward and reverse recursive path state metrics, computing from the forward and reverse recursive path state metrics the log-likelihood ratio (LLR) for 1 and 0, and interleaving the decision bits; and for identifying those MAP decoder decision bits which are non-convergent, computing a set of branch metrics for the received data, computing from the forward and reverse recursive path state metrics the LLR for 1 and 0 for each non-convergent decision bit, and interleaving the non-convergent decision bits.
Abstract:
A method of calculating branch metrics for a butterfly in a trellis of a MAP-genre decoding algorithm, the method comprising providing initialised branch metrics for the transitions in the butterfly and incrementing the branch metrics with a group of data values corresponding to said transitions in accordance with control signals derived from the butterfly index and one or more polynomials describing tap positions of the encoding equipment to whose operation the trellis relates, wherein said group comprises systematic bit and parity bit values.
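As a rough illustration of incrementing an initialised branch metric with systematic and parity contributions under polynomial tap masks, the following sketch assumes a rate-1/2 recursive systematic convolutional encoder and a max-log metric; the function name, bit-mask convention, and sign mapping are assumptions, not taken from the abstract:

```python
def branch_metric(sys_llr, par_llr, state, input_bit,
                  fb_poly, ff_poly, mem):
    """Branch metric for one trellis transition.

    fb_poly / ff_poly are bit masks giving the tap positions of the
    encoder's feedback and feedforward polynomials; mem is the
    encoder memory length. sys_llr / par_llr are the received
    channel values for the systematic and parity bits.
    """
    # recursive bit: input XOR parity of the feedback taps of the state
    fb = input_bit ^ (bin(state & fb_poly).count("1") & 1)
    # parity bit: feedforward taps applied to the extended register
    reg = (fb << mem) | state
    parity = bin(reg & ff_poly).count("1") & 1
    # correlate transmitted bits (0 -> +1, 1 -> -1) with channel values
    metric = sys_llr if input_bit == 0 else -sys_llr
    metric += par_llr if parity == 0 else -par_llr
    return metric
```

In a butterfly-organised implementation such as the one claimed, the two states of a butterfly share most of this computation, so the control signals derived from the butterfly index and the polynomials select which increments apply to which transition.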
Abstract:
The invention relates to a turbo decoder which is provided for decoding a data signal (D) transmitted over a disturbed channel and which comprises a symbol estimator (MAP_DEC) and a digital signal processor (DSP). Inside a calculating loop of the iterative turbo decoding, the symbol estimator (MAP_DEC) conducts two symbol estimations and the DSP conducts an interleaving procedure and a deinterleaving procedure. A bi-directional interface (FMI) is provided for the transmission of data between the symbol estimator (MAP_DEC) and the DSP.
Abstract:
A method and apparatus for reducing memory requirements and increasing the speed of decoding of turbo-encoded data in a MAP decoder. Turbo-coded data is decoded by computing alpha values and saving checkpoint alpha values on a stack. The checkpoint values are then used to recreate the alpha values when they are needed in computations. By saving only a subset of the alpha values, the memory required to hold them is conserved. Alpha and beta computations are made using a min* operation, which provides a mathematical equivalent for adding values without having to convert out of the logarithmic domain. To increase the speed of the min* operation, logarithmic values are computed assuming that one min* input is larger than the other, and vice versa, at the same time; the correct value is selected later based on a partial-result calculation comparing the values accepted for the min* calculation. Additionally, calculations are begun without waiting for previous calculations to finish. The computational values are kept to a minimal accuracy to minimize propagation delay. An offset is added to the logarithmic calculations in order to keep the calculations from becoming negative and requiring another bit to represent a sign. Circuits that correct for errors in partial results are employed. Normalization circuits, which zero alpha and beta most significant bits based on a previous decoder iteration, are employed to add only minimal time to circuit critical paths.
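The min* operation referred to above has a standard closed form: min*(a, b) = -ln(e^(-a) + e^(-b)) = min(a, b) - ln(1 + e^(-|a - b|)), which is exactly the addition of two probabilities carried out in the negative-log domain. A minimal scalar sketch (not the patent's pipelined, speculative circuit) is:

```python
import math

def min_star(a: float, b: float) -> float:
    """min*(a, b) = -ln(exp(-a) + exp(-b)).

    Adds two probabilities represented as negative logarithms
    without leaving the log domain: the hard minimum plus a
    bounded correction term ln(1 + exp(-|a - b|)).
    """
    return min(a, b) - math.log1p(math.exp(-abs(a - b)))
```

The hardware trick the abstract describes computes both branches of the `min` (each assuming its input is the smaller) concurrently and selects the correct one once the comparison resolves, which shortens the critical path relative to this sequential form.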
Abstract:
A method for parallel concatenated (Turbo) encoding and decoding. Turbo encoders receive a sequence of input data tuples and encode them. The input sequence may correspond to a sequence of an original data source, or to an already coded data sequence such as provided by a Reed-Solomon encoder. A turbo encoder generally comprises two or more encoders separated by one or more interleavers. The input data tuples may be interleaved using a modulo scheme in which the interleaving is according to some method (such as block or random interleaving) with the added stipulation that the input tuples may be interleaved only to interleaved positions having the same modulo-N (where N is an integer) as they have in the input data sequence. If all the input tuples are encoded by all encoders then output tuples can be chosen sequentially from encoders and no tuples will be missed. If the input tuples comprise multiple bits, the bits may be interleaved independently to interleaved positions having the same modulo-N and the same bit position. This may improve the robustness of the code. A first encoder may have no interleaver or all encoders may have interleavers, whether the input tuple bits are interleaved independently or not. Modulo type interleaving also allows decoding in parallel.
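The modulo-N stipulation, that a tuple at input position i may move only to an interleaved position j with j mod N = i mod N, can be sketched as a random interleaver constrained to permute within each residue class; the function name and seeding are illustrative, not the patent's construction:

```python
import random

def modulo_interleave(n: int, N: int, seed: int = 0) -> list[int]:
    """Return a permutation of range(n) in which index i is mapped
    only to a position with the same residue modulo N."""
    rng = random.Random(seed)
    perm = [0] * n
    for r in range(N):
        # positions sharing residue r are shuffled among themselves
        positions = list(range(r, n, N))
        shuffled = positions[:]
        rng.shuffle(shuffled)
        for src, dst in zip(positions, shuffled):
            perm[src] = dst
    return perm
```

Because every residue class is preserved, a decoder can be split into N units, each responsible for one residue class, and the units never contend for the same interleaved positions, which is what makes the parallel decoding mentioned at the end of the abstract possible.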
Abstract:
A MAP decoder may be implemented in parallel on the basis of a matrix-based description of the MAP algorithm. In one implementation, a device may receive an input array that represents received encoded data (610) and calculate, in parallel, a series of transition matrices from the input array (620). The device may further calculate, in parallel, products of the cumulative products of the series of transition matrices and an initialization vector (630). The device may further calculate, in parallel and based on the products of the cumulative products of the series of transition matrices and the initialization vector, an output array that corresponds to a decoded version of the received encoded data in the input array (640). The calculations may be based on the so-called scan technique (scan algorithm). The above approach may allow MAP decoding to be implemented in technical computing environments (TCEs) or on GPUs.
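The cumulative products of transition matrices can be sketched as a prefix-product scan over the gamma matrices; a serial NumPy version is shown below (names are illustrative), with the understanding that it is this prefix-product structure that a parallel scan algorithm exploits on a GPU:

```python
import numpy as np

def forward_metrics_scan(gammas, alpha0):
    """Compute alpha_k = alpha_0 · G_1 · G_2 · ... · G_k for all k
    via an inclusive scan (cumulative product) over the transition
    matrices. Matrix products are associative, so the serial loop
    below can be replaced by a logarithmic-depth parallel scan."""
    prefix = np.empty((len(gammas),) + gammas[0].shape)
    acc = np.eye(gammas[0].shape[0])
    for k, g in enumerate(gammas):
        acc = acc @ g          # running product G_1 ... G_k
        prefix[k] = acc
    # multiply each cumulative product by the initialization vector
    return np.array([alpha0 @ p for p in prefix])
```

The backward (beta) metrics follow the same pattern with the matrix order reversed, and the final output array is formed from the element-wise combination of the two scans.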
Abstract:
A memory-efficient, accelerated implementation architecture for BCJR-based forward error correction algorithms. In this architecture, a memory-efficient storage scheme is adopted for the metrics and channel information to achieve high processing speed with a low memory requirement. Thus, BCJR-based algorithms can be accelerated, and the implementation complexity can be reduced. This scheme can be used in BCJR-based turbo decoder and LDPC decoder implementations.
Abstract:
A unified decoder (10) is capable of decoding data encoded with convolutional codes, Turbo codes and LDPC codes. The decoder comprises a first set (20,...,24) and a second set (26,...,30) of trellis processors for calculating path metrics during decoding of convolutional codes and for calculating alpha and beta metrics during decoding of turbo and LDPC codes. The decoder further comprises a normalization unit for the normalization of metrics (46), a set of reliability calculators, a trace-back unit (32) and two alpha-beta swap units (38,40) for the redistribution of the metrics to the trellis processors. In at least one embodiment, a unified decoder is implemented within a multi-standard wireless device.