Abstract:
A method of encoding a video signal of a fine-granularity scalability (FGS) layer by reordering transform coefficients. The method includes classifying the transform coefficients of blocks in a current layer to be encoded into significant coefficients and refinement coefficients, reordering the significant and refinement coefficients according to this classification, and coding the reordered coefficients.
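The classify-then-reorder step can be sketched as follows. This is a minimal illustration, not the patented method: the classification rule assumed here (a coefficient is a refinement coefficient when the co-located lower-layer coefficient is already nonzero, and a significant coefficient otherwise) and the function name are hypothetical.

```python
def reorder_coefficients(curr_coeffs, base_coeffs):
    """Split current-layer transform coefficients into significant and
    refinement groups, then emit all significant coefficients before
    all refinement coefficients (assumed rule: a coefficient is a
    'refinement' if the co-located base coefficient is nonzero)."""
    significant, refinement = [], []
    for c, b in zip(curr_coeffs, base_coeffs):
        (refinement if b != 0 else significant).append(c)
    return significant + refinement
```

Grouping coefficients of the same kind lets a subsequent entropy coder exploit their differing statistics.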
Abstract:
The present invention relates to video compression technology and, more particularly, to an efficient flag-coding method and apparatus that exploit the spatial correlation among the various flags used to code a video frame. To this end, there is provided an apparatus for encoding a flag used to code a video frame composed of a plurality of blocks, the apparatus including a flag-assembling unit, which collects the flag values allotted to each block and produces a flag bit string based on the spatial correlation of the blocks; a maximum-run-determining unit, which determines a maximum run of the flag bit string; and a converting unit, which converts the bits of the flag bit string into codewords of size no greater than the maximum run, using a predetermined codeword table.
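The run-limiting step can be illustrated as below. This is a sketch under stated assumptions: the predetermined codeword table is not given in the abstract, so the function only shows how a flag bit string is segmented into runs capped at the maximum run; the name `flag_runs` is hypothetical.

```python
def flag_runs(flag_bits, max_run):
    """Segment a flag bit string into (bit value, run length) pairs,
    splitting any run longer than max_run so every symbol fed to the
    codeword table has a size no greater than the maximum run."""
    runs, i = [], 0
    while i < len(flag_bits):
        j = i
        # extend the run while the bit repeats and the cap is not hit
        while j < len(flag_bits) and flag_bits[j] == flag_bits[i] and j - i < max_run:
            j += 1
        runs.append((flag_bits[i], j - i))
        i = j
    return runs
```

Each `(bit, length)` pair would then be looked up in the codeword table to produce the output bitstream.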
Abstract:
A method and apparatus for encoding and decoding a video signal on a group basis are disclosed, in which the blocks constituting a multi-layer video signal are coded. The method includes grouping every two or more blocks whose symbols equal a predetermined value, generating group-based symbols, each indicating information about the grouped blocks, and coding the group-based symbols.
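A toy version of the grouping step is sketched below. The group size of two, the predetermined value of zero, and the symbol tags are all assumptions chosen for illustration, not details taken from the patent.

```python
def group_symbols(block_symbols, group_size=2, skip_value=0):
    """Walk consecutive groups of blocks; when every symbol in a group
    equals the predetermined value, emit a single group-based symbol
    in place of the individual block symbols."""
    out = []
    for i in range(0, len(block_symbols), group_size):
        group = block_symbols[i:i + group_size]
        if all(s == skip_value for s in group):
            out.append(("GROUP", skip_value))   # one symbol for the whole group
        else:
            out.extend(("SYM", s) for s in group)
    return out
```

When many blocks share the common value, one group symbol replaces several per-block symbols, reducing the coded rate.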
Abstract:
A method and apparatus for efficiently encoding the diverse flags used in a multilayer-based scalable video codec, based on inter-layer correlation. The encoding method includes judging whether the flags of a current layer included in a specified unit area are all equal to the flags of a base layer, setting a specified prediction flag according to the result of the judgment, and, if the flags of the current layer are judged equal to the flags of the base layer, skipping the flags of the current layer and inserting the flags of the base layer and the prediction flag into a bitstream.
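The skip-or-send decision reduces to a simple comparison. The sketch below assumes a dictionary return value and the name `encode_flags_with_prediction`, which are illustrative choices, not the patent's syntax.

```python
def encode_flags_with_prediction(curr_flags, base_flags):
    """If every current-layer flag in the unit area equals the
    corresponding base-layer flag, set the prediction flag and skip
    the current-layer flags; otherwise transmit them explicitly."""
    if curr_flags == base_flags:
        return {"pred_flag": 1, "flags": None}      # flags skipped, predicted from base
    return {"pred_flag": 0, "flags": curr_flags}    # flags sent explicitly
```

The decoder reads `pred_flag` first and, when it is set, copies the base-layer flags instead of parsing current-layer flags.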
Abstract:
Methods and apparatuses for entropy encoding and decoding Fine-Granularity Scalability (FGS) layer video data are provided. The encoding method includes extracting residual data between a first block and a corresponding second block in a layer lower than the FGS layer; obtaining transform coefficients; dividing the transform coefficients in the first block into at least two subblocks; calculating the length of a prefix of first coefficients in the subblocks; combining the prefix with a suffix used to distinguish the first coefficients; and VLC-encoding the first coefficients. The encoding apparatus includes a subblock divider, a prefix generator, and a significant coefficient encoding unit. The decoding method includes calculating the range of a transform coefficient using the length of its prefix; extracting the value of the transform coefficient; VLC-decoding the value; and combining first and second subblocks containing the decoded coefficients.
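A familiar instance of prefix-plus-suffix coding is the Exp-Golomb code, sketched below as a stand-in. The patent's actual prefix/suffix construction may differ; this only illustrates how a prefix fixes the coefficient's range while a suffix distinguishes values within it.

```python
def prefix_suffix_code(value):
    """Exp-Golomb-style codeword for a nonnegative value: a unary
    prefix encodes the bit-length class (the range), and a
    fixed-length suffix distinguishes values within that class."""
    v = value + 1
    n = v.bit_length()
    prefix = "0" * (n - 1) + "1"     # unary prefix: class of the value
    suffix = format(v, "b")[1:]      # remaining n-1 bits within the class
    return prefix + suffix
```

A decoder counts leading zeros to recover the prefix length, which tells it exactly how many suffix bits to read, mirroring the abstract's range-from-prefix decoding step.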
Abstract:
A method and apparatus for encoding and decoding a video signal according to directional intra-residual prediction. The video encoding method of the present invention includes calculating first residual data by performing directional intra-prediction on a first block of a base layer with reference to a second block of the base layer, calculating second residual data by performing directional intra-prediction on a third block of an enhancement layer that corresponds to the first block of the base layer with reference to a fourth block of the enhancement layer that corresponds to the second block of the base layer, and encoding the third block according to the directional intra-residual prediction by obtaining third residual data that is a difference between the first residual data and the second residual data.
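The core arithmetic, a residual of residuals, can be sketched with flat pixel lists. The directional intra-prediction is simplified here to the co-located reference block itself, which is an assumption made for brevity; only the subtraction structure follows the abstract.

```python
def intra_residual_prediction(base_block, base_ref, enh_block, enh_ref):
    """Third residual = (enhancement-layer residual) - (base-layer
    residual). Each residual is a block minus its intra prediction,
    simplified here to the reference block itself."""
    r1 = [b - r for b, r in zip(base_block, base_ref)]   # first residual (base)
    r2 = [e - r for e, r in zip(enh_block, enh_ref)]     # second residual (enh.)
    return [a - b for a, b in zip(r2, r1)]               # third residual, encoded
```

Because the two layers' intra residuals are correlated, their difference is typically smaller than either residual alone, which is what makes the third residual cheaper to encode.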
Abstract:
A method and apparatus for efficiently encoding a plurality of layers using inter-layer information in a multi-layer based video codec are disclosed. The video encoding method includes reading the weighting factors of one layer; performing motion compensation on reference frames for the current frame based on a motion vector; generating a predicted frame for the current frame by taking a weighted sum of the motion-compensated reference frames using the read weighting factors; and encoding the difference between the current frame and the predicted frame.
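The weighted-sum and difference steps can be sketched as below. Motion compensation is omitted (the inputs are assumed to be already motion-compensated reference frames, flattened to pixel lists), and the two-reference, complementary-weight form is an illustrative assumption.

```python
def weighted_prediction(ref0, ref1, w):
    """Predicted frame as a weighted sum of two motion-compensated
    reference frames, using a weighting factor w read from a layer."""
    return [w * a + (1 - w) * b for a, b in zip(ref0, ref1)]

def prediction_residual(current, predicted):
    """Difference between the current frame and the predicted frame,
    which is what actually gets encoded."""
    return [c - p for c, p in zip(current, predicted)]
```

Reusing weighting factors across layers avoids re-estimating and re-signaling them for every layer, which is the efficiency the abstract claims.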
Abstract:
A method of reducing a mismatch between an encoder and a decoder in motion compensated temporal filtering, and a video coding method and apparatus using the same. The video coding method includes dividing input frames into one final low-frequency frame and at least one high-frequency frame by performing motion compensated temporal filtering on the input frames; encoding the final low-frequency frame and decoding the encoded final low-frequency frame; re-estimating the at least one high-frequency frame using the decoded final low-frequency frame; and encoding the re-estimated high-frequency frame.
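The closed-loop idea can be shown with a drastically simplified two-frame lifting step. Motion compensation is omitted, the low-frequency frame is a plain average, and `quantize` stands in for the whole encode-then-decode round trip; all of these simplifications are assumptions for illustration.

```python
def mctf_reestimate(frame_a, frame_b, quantize):
    """Form the low-frequency frame, simulate encoding and decoding it
    via `quantize`, then re-estimate the high-frequency frame against
    the DECODED low-frequency frame, so the encoder predicts from the
    same data the decoder will have (no open-loop mismatch)."""
    low = [(a + b) / 2 for a, b in zip(frame_a, frame_b)]  # final low-freq frame
    low_dec = [quantize(x) for x in low]                   # encoded + decoded copy
    high = [b - ld for b, ld in zip(frame_b, low_dec)]     # re-estimated high-freq
    return low_dec, high
```

Had `high` been computed from the original `low`, the decoder, which only sees `low_dec`, would reconstruct with drift; re-estimation removes that mismatch.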
Abstract:
A method of compressing a motion vector (MV) of a first macroblock when the region of a first lower layer corresponding to the first macroblock of a current layer frame does not have an MV is provided. The method includes interpolating the MV of a second macroblock to which the region belongs, based on the MV of at least one neighboring macroblock, and predicting the MV of the first macroblock using the interpolated MV.
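The interpolation step can be sketched as an average of neighboring macroblock MVs. Averaging is one plausible interpolation, chosen here for illustration; the abstract does not pin down the exact formula.

```python
def interpolate_mv(neighbor_mvs):
    """Derive an MV for a macroblock whose co-located lower-layer
    region has none, by averaging the MVs (x, y) of its neighboring
    macroblocks. The result can then serve as the MV predictor."""
    xs = [mv[0] for mv in neighbor_mvs]
    ys = [mv[1] for mv in neighbor_mvs]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

The current-layer macroblock's MV is then coded as a difference against this interpolated predictor rather than transmitted in full.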
Abstract:
A multi-layered video encoding method is provided wherein motion estimation is performed by using one of two frames of a lower layer temporally closest to an unsynchronized frame of a current layer as a reference frame. A virtual base layer frame at the same temporal location as that of the unsynchronized frame is generated using a motion vector obtained as a result of the motion estimation and the reference frame. The generated virtual base layer frame is subtracted from the unsynchronized frame to generate a difference, and the difference is encoded.
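The virtual-frame construction and subtraction can be sketched in one dimension. A simple pixel shift stands in for real block-based motion compensation, and the clamping at frame borders is an illustrative choice; the function name is hypothetical.

```python
def encode_unsynchronized_frame(unsync_frame, reference_frame, motion_shift):
    """Build a virtual base-layer frame at the unsynchronized frame's
    temporal position by motion-compensating the temporally closest
    lower-layer reference (modeled as a clamped 1-D shift), then form
    the difference that would actually be encoded."""
    n = len(reference_frame)
    virtual = [reference_frame[max(0, min(n - 1, i - motion_shift))]
               for i in range(n)]                       # motion-compensated copy
    diff = [u - v for u, v in zip(unsync_frame, virtual)]  # residual to encode
    return virtual, diff
```

When the motion model fits, the virtual frame closely matches the unsynchronized frame and the residual is near zero, which is the coding gain the method targets.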