HIGH SPEED TURBO CODES DECODER FOR 3G USING PIPELINED SISO LOG-MAP DECODERS ARCHITECTURE
    31.
    Invention application, Pending (Published)

    Publication No.: WO2004062111A9

    Publication Date: 2004-08-26

    Application No.: PCT/US0335865

    Filing Date: 2003-11-07

    Inventor: NGUYEN QUANG

    Abstract: The invention encompasses several improvements to a Turbo Codes Decoder method and apparatus, providing a more suitable, practical and simpler way to implement a Turbo Codes Decoder in ASIC or DSP code. (1) Two parallel Turbo Codes Decoder blocks (40A & 40B) compute soft-decoded data RXDa, RXDb from two different receive paths. (2) Two pipelined Log-MAP decoders (A42 & B44) are used for iterative decoding of received data. (3) A sliding window of block N data is used on the input memory for pipeline operations. (4) The output block N data from the first decoder A is stored in RAM memory A, and the second decoder B stores its output data in RAM memory B while decoding block N data from RAM memory A in the same clock cycle. (5) Log-MAP decoders are simpler to implement and have low power consumption. (6) The pipelined Log-MAP decoder architecture provides high-speed data throughput, one output per clock cycle.
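    The ping-pong buffering of item (4) can be sketched in Python. This is a toy model: `decoder_a`, `decoder_b`, and their placeholder arithmetic are illustrative assumptions, not the patent's Log-MAP computations.

```python
def decoder_a(block):
    """Stand-in for the first SISO Log-MAP decoder (placeholder arithmetic)."""
    return [x + 1 for x in block]

def decoder_b(block):
    """Stand-in for the second SISO Log-MAP decoder (placeholder arithmetic)."""
    return [x * 2 for x in block]

def pipelined_decode(blocks):
    """Two-stage pipeline: while decoder B consumes block N from RAM A,
    decoder A produces block N+1, so both stages work every cycle."""
    ram_a = None                     # RAM A: hands decoder A's output to B
    outputs = []
    for block in blocks + [None]:    # one extra cycle to flush the pipeline
        if ram_a is not None:
            outputs.append(decoder_b(ram_a))                      # stage 2 on block N
        ram_a = decoder_a(block) if block is not None else None   # stage 1 on block N+1
    return outputs
```

    Once the pipeline is full, one decoded block leaves per loop iteration, mirroring the one-output-per-clock-cycle claim.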

    PARALLELIZED SLIDING-WINDOW MAP DECODING
    32.
    Invention application, Pending (Published)

    Publication No.: WO2004054115A1

    Publication Date: 2004-06-24

    Application No.: PCT/US2003/033584

    Filing Date: 2003-10-23

    Abstract: A method of decoding using a log posterior probability ratio L(u_k), which is a function of a forward variable α(·) and a backward variable β(·). The method comprises dividing the forward variable α(·) and the backward variable β(·) into, for example, two segments p and q, where p + q equals the length of the code word U. The forward segments of α(·) are calculated in parallel, and the backward segments of β(·) are calculated in parallel. The ratio L(u_k) is calculated using the parallel-calculated segments of α(·) and β(·).
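    A toy sketch of the segmented computation, with a scalar additive recursion standing in for the real per-state Log-MAP recursions. The zero boundary seeds and all names are illustrative assumptions, not the patent's method.

```python
def forward_segment(gammas, init):
    """alpha[k] = alpha[k-1] + gamma[k], seeded at the segment's left edge."""
    out, acc = [], init
    for g in gammas:
        acc += g
        out.append(acc)
    return out

def backward_segment(gammas, init):
    """beta[k] = beta[k+1] + gamma[k], run from the segment's right edge."""
    out, acc = [], init
    for g in reversed(gammas):
        acc += g
        out.insert(0, acc)
    return out

def llr_segments(gammas, p):
    """Split the length-(p+q) word at position p. The two alpha segments
    (and the two beta segments) share no state once their boundary values
    are seeded (0.0 here, a neutral guess), so they could run in parallel."""
    a = forward_segment(gammas[:p], 0.0) + forward_segment(gammas[p:], 0.0)
    b = backward_segment(gammas[:p], 0.0) + backward_segment(gammas[p:], 0.0)
    return [ai - bi for ai, bi in zip(a, b)]   # toy stand-in for L(u_k)
```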

    SLIDING-WINDOW DECODER WITH PROLOG-WINDOWS HAVING FLEXIBEL SIZES
    33.
    Invention application, Pending (Published)

    Publication No.: WO2004038929A1

    Publication Date: 2004-05-06

    Application No.: PCT/IB2003/004200

    Filing Date: 2003-09-22

    CPC classification number: H03M13/3905 H03M13/2957 H03M13/3927 H03M13/41

    Abstract: Sliding-window decoders with processor systems (1) for decoding streams of symbols run prolog deriving processes (23), which derive initial parameters for prolog-windows, and main deriving processes (24), which derive main parameters for main-windows using initial conditions defined by said initial parameters. By introducing defining processes (22) that give the prolog-windows a flexible number of symbols, i.e. a flexible size depending on the required quality of the initial condition, the prolog-window can be made larger or smaller (initial condition with higher or lower quality). As a result, efficiency is improved, because the average overlap between the prolog-windows of a given main-window and a neighboring main-window is reduced. Preferably, per main-window, the prolog-windows get increasing sizes. Based on the insight that initial conditions need to have flexible qualities, the basic idea introduces flexible sizes for prolog-windows. Sizes that grow per iteration make the sliding-window decoders even more advantageous.
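    One way such a flexible sizing rule might look, assuming (as the abstract suggests) prolog-window sizes that grow per main-window and per iteration. The base size, growth rate, and cap below are invented for illustration, not taken from the patent.

```python
def prolog_sizes(num_windows, iteration, base=8, growth=4, cap=32):
    """Return a prolog-window length for each main-window.

    Later iterations benefit from higher-quality initial conditions, so the
    size grows with the iteration index; it also grows per main-window.
    The cap keeps the overlap with neighboring main-windows bounded."""
    return [min(base + growth * iteration + w, cap) for w in range(num_windows)]
```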

    COMMUNICATION UNIT AND METHOD OF DECODING
    34.
    Invention application, Pending (Published)

    Publication No.: WO2004038928A1

    Publication Date: 2004-05-06

    Application No.: PCT/EP2003/011570

    Filing Date: 2003-10-17

    Abstract: A method (600, 800) for turbo decoding one or more data blocks. The method includes the steps of receiving (602, 802) one or more data blocks in a plurality of time slots at a communication unit. At least one Backward Processor computes (625) backward path metrics for a plurality of data slots and stores the backward path metrics in a storage element. A Forward Processor computes (645, 835) forward path metrics for the plurality of data slots. A data block determination function calculates and outputs (648, 838) decoded data for the data blocks based on the forward path metrics and the stored backward path metrics. By storing backward path metrics in a turbo decoding operation, the data block determination function, for example an a-posteriori probabilities module, calculates and outputs decoded data using less storage space than known techniques that store forward path metrics. There is also an advantage in delay over known "sliding window" decoding techniques.
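    The store-backward/compute-forward ordering can be sketched with a toy scalar metric; the real per-state path metrics and a-posteriori computation are reduced to single additions, and all names are illustrative.

```python
def decode_block(branch_metrics):
    """Backward pass first, forward pass second, as the abstract describes."""
    n = len(branch_metrics)

    # Backward pass: compute and store all backward path metrics.
    beta = [0.0] * (n + 1)
    for k in range(n - 1, -1, -1):
        beta[k] = beta[k + 1] + branch_metrics[k]

    # Forward pass: each forward metric is consumed immediately, so only
    # the backward metrics ever occupy storage (the claimed memory saving).
    out, alpha = [], 0.0
    for k in range(n):
        out.append(alpha + beta[k + 1])   # toy stand-in for the decoded output
        alpha += branch_metrics[k]
    return out
```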

    METHOD AND APPARATUS FOR DECODING OF TURBO ENCODED DATA
    35.
    Invention application, Pending (Published)

    Publication No.: WO0223739A3

    Publication Date: 2003-09-25

    Application No.: PCT/US0128974

    Filing Date: 2001-09-12

    Abstract: A method and apparatus for reducing memory requirements and increasing the speed of decoding turbo encoded data in a MAP decoder. Turbo coded data is decoded by computing alpha values and saving checkpoint alpha values on a stack. The checkpoint values are then used to recreate the alpha values needed in later computations. By saving only a subset of the alpha values, the memory required to hold them is conserved. Alpha and beta computations are made using a min* operation, which provides a mathematical equivalent for adding logarithmic values without having to convert out of the logarithmic domain. To increase the speed of the min* operation, logarithmic values are computed assuming that one min* input is larger than the other and vice versa at the same time; the correct value is selected later based on a partial-result calculation that compares the values accepted for the min* calculation. Additionally, calculations are begun without waiting for previous calculations to finish. The computational values are kept to a minimal accuracy to minimize propagation delay. An offset is added to the logarithmic calculations to keep them from becoming negative and requiring another bit to represent a sign bit. Circuits that correct for errors in partial results are employed. Normalization circuits, which zero the alpha and beta most significant bits based on a previous decoder iteration, add only minimal time to circuit critical paths.
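    The min* operation is standard in the turbo-decoding literature: min*(a, b) = -ln(e^-a + e^-b) = min(a, b) - ln(1 + e^-|a-b|). The sketch below shows that arithmetic together with the checkpoint-stack idea, using a toy single-state update in place of the real alpha recursion; the speculative dual evaluation and the hardware circuits are not modeled.

```python
import math

def min_star(a, b):
    """min*(a, b) = -ln(e**-a + e**-b), computed without leaving the log domain."""
    return min(a, b) - math.log1p(math.exp(-abs(a - b)))

def checkpoint_alphas(gammas, stride=4):
    """Forward recursion that pushes every stride-th alpha onto a stack."""
    stack, alpha = [], 0.0
    for k, g in enumerate(gammas):
        if k % stride == 0:
            stack.append((k, alpha))
        alpha = min_star(alpha, alpha + g)   # toy single-state update
    return stack, alpha

def recreate_alpha(gammas, stack, k):
    """Recreate alpha at step k from the nearest checkpoint at or before k."""
    k0, alpha = max((c for c in stack if c[0] <= k), key=lambda c: c[0])
    for g in gammas[k0:k]:
        alpha = min_star(alpha, alpha + g)
    return alpha
```

    Recomputing from the nearest checkpoint trades a short replay of the recursion for storing only a subset of the alpha values.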

    PARALLEL CONCATENATED CODE WITH SOFT-IN SOFT-OUT INTERACTIVE TURBO DECODER
    36.
    Invention application, Pending (Published)

    Publication No.: WO0223738A3

    Publication Date: 2003-05-08

    Application No.: PCT/US0128875

    Filing Date: 2001-09-12

    Abstract: A method for parallel concatenated (Turbo) encoding and decoding. Turbo encoders receive a sequence of input data tuples and encode them. The input sequence may correspond to a sequence from an original data source, or to an already coded data sequence such as one provided by a Reed-Solomon encoder. A turbo encoder generally comprises two or more encoders separated by one or more interleavers. The input data tuples may be interleaved using a modulo scheme in which the interleaving follows some method (such as block or random interleaving) with the added stipulation that input tuples may be interleaved only to interleaved positions having the same modulo-N (where N is an integer) as they have in the input data sequence. If all the input tuples are encoded by all encoders, then output tuples can be chosen sequentially from the encoders and no tuples will be missed. If the input tuples comprise multiple bits, the bits may be interleaved independently to interleaved positions having the same modulo-N and the same bit position; this may improve the robustness of the code. A first encoder may have no interleaver, or all encoders may have interleavers, whether or not the input tuple bits are interleaved independently. Modulo-type interleaving also allows decoding in parallel.
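    The modulo stipulation can be sketched as a permutation builder that shuffles each residue class within itself. This is one simple construction that satisfies the same-modulo-N property; the patent permits any interleaving method with this property, and the seeded `random.Random` is an illustrative choice.

```python
import random

def modulo_interleaver(length, n, seed=0):
    """Build a permutation in which index i maps only to an index j
    with j % n == i % n, by shuffling each residue class independently."""
    rng = random.Random(seed)
    perm = list(range(length))
    for r in range(n):
        cls = [i for i in range(length) if i % n == r]   # residue class r
        shuffled = cls[:]
        rng.shuffle(shuffled)
        for src, dst in zip(cls, shuffled):
            perm[src] = dst
    return perm
```

    Because each residue class is permuted only among its own positions, N decoders can each work on one class of interleaved positions in parallel.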

    A DECODER FOR TRELLIS-BASED CHANNEL ENCODING
    38.
    Invention application, Pending (Published)

    Publication No.: WO02029977A2

    Publication Date: 2002-04-11

    Application No.: PCT/US2001/030355

    Filing Date: 2001-09-26

    Abstract: A system and method for decoding a channel bit stream efficiently performs trellis-based operations. The system includes a butterfly coprocessor and a digital signal processor. For trellis-based encoders, the system decodes a channel bit stream by performing operations in parallel in the butterfly coprocessor, at the direction of the digital signal processor. The operations are used in implementing the MAP algorithm, the Viterbi algorithm, and other soft- or hard-output decoding algorithms. The DSP may perform memory management and algorithmic scheduling on behalf of the butterfly coprocessor. The butterfly coprocessor may perform parallel butterfly operations for increased throughput. The system maintains flexibility for use in a number of possible decoding environments.
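    A single add-compare-select "butterfly", the trellis primitive such a coprocessor parallelizes, can be sketched as follows. The metric layout and the max-selection (Viterbi-style) convention are illustrative assumptions, not the patent's hardware.

```python
def acs_butterfly(m0, m1, b00, b01, b10, b11):
    """One add-compare-select butterfly: source-state metrics m0, m1 and
    branch metrics b_sd (source s to destination d) yield the two updated
    destination path metrics."""
    dest0 = max(m0 + b00, m1 + b10)   # best path into destination state 0
    dest1 = max(m0 + b01, m1 + b11)   # best path into destination state 1
    return dest0, dest1
```

    A coprocessor evaluates many such butterflies at once, one per pair of trellis states, which is where the parallel throughput comes from.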
