Abstract:
The present invention relates to a method and apparatus for setting a reference picture index of a temporal merging candidate. An inter-picture prediction method using a temporal merging candidate can include the steps of: determining a reference picture index for a current block; and deriving a temporal merging candidate block of the current block and calculating a temporal merging candidate from the temporal merging candidate block, wherein the reference picture index of the temporal merging candidate can be determined regardless of whether any block other than the current block has been decoded. Accordingly, video processing speed can be increased and video processing complexity can be reduced.
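The point of the abstract above is that the temporal candidate's reference index does not depend on other blocks' decoding state. A minimal sketch of that idea, with an assumed fixed index of 0 and an illustrative POC-distance scaling formula (the function name and scaling details are not from the patent):

```python
# Sketch: derive a temporal merging candidate whose reference picture
# index is fixed (here to 0) rather than read from neighbouring blocks,
# so derivation needs no data from other, possibly undecoded, blocks.
# Names and the scaling formula are illustrative assumptions.

def derive_temporal_merge_candidate(col_mv, col_poc_diff, cur_poc_diff):
    """Scale the co-located block's motion vector to the fixed
    reference picture (index 0) of the current picture."""
    FIXED_REF_IDX = 0  # independent of any other block's decoding state
    if col_poc_diff == 0:
        scaled = col_mv
    else:
        scale = cur_poc_diff / col_poc_diff
        scaled = (round(col_mv[0] * scale), round(col_mv[1] * scale))
    return FIXED_REF_IDX, scaled

ref_idx, mv = derive_temporal_merge_candidate((8, -4), col_poc_diff=4, cur_poc_diff=2)
# ref_idx == 0, mv == (4, -2)
```

Because the reference index is a constant, the candidate can be derived in parallel with (or before) the decoding of neighbouring blocks, which is the speed benefit the abstract claims.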
Abstract:
The present invention relates to a method and device for sharing a candidate list. A method of generating a merging candidate list for a predictive block may include: producing, on the basis of a coding block including a predictive block on which a parallel merging process is performed, at least one of a spatial merging candidate and a temporal merging candidate of the predictive block; and generating a single merging candidate list for the coding block on the basis of the produced merging candidate. Thus, it is possible to increase processing speeds for coding and decoding by performing inter-picture prediction in parallel on a plurality of predictive blocks.
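The sharing described above can be pictured as building one candidate list from the coding block's own boundary, so every predictive block inside it indexes the same list. A hedged sketch, with assumed neighbour positions loosely modelled on HEVC's spatial merge positions (not the patent's exact derivation):

```python
# Sketch: all predictive blocks inside one coding block share a single
# merging candidate list built from the coding block's boundary, so
# their inter predictions can run in parallel.  Positions are
# illustrative assumptions.

def shared_merge_list(cu_x, cu_y, cu_size, get_mv):
    """Build one candidate list from spatial neighbours of the coding
    block (not of each predictive block)."""
    spatial_positions = [
        (cu_x - 1, cu_y + cu_size - 1),   # left
        (cu_x + cu_size - 1, cu_y - 1),   # above
        (cu_x + cu_size, cu_y - 1),       # above-right
        (cu_x - 1, cu_y + cu_size),       # below-left
        (cu_x - 1, cu_y - 1),             # above-left
    ]
    candidates = []
    for pos in spatial_positions:
        mv = get_mv(pos)                  # None if unavailable
        if mv is not None and mv not in candidates:
            candidates.append(mv)        # prune duplicates
    return candidates[:5]                 # cap the list length

# Every predictive block of the coding block indexes the same list:
mvs = {(-1, 7): (3, 0), (7, -1): (3, 0), (8, -1): (0, 1)}
lst = shared_merge_list(0, 0, 8, mvs.get)
# lst == [(3, 0), (0, 1)] after pruning the duplicate
```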
Abstract:
A video decoding method according to an embodiment of the present invention may include: determining a type of a filter to be applied to a first-layer picture to which a second-layer picture, the decoding target, refers; determining a filtering target of the first-layer picture to which the filter is applied; filtering the filtering target based on the type of the filter; and adding the filtered first-layer picture to a second-layer reference picture list. Accordingly, the video decoding method and an apparatus using the same may reduce a prediction error in an upper layer and enhance encoding efficiency.
Abstract:
An inter-prediction method according to the present invention comprises the steps of: deriving motion information of a current block; and generating a prediction block for the current block on the basis of the derived motion information. According to the present invention, computational complexity can be reduced and encoding efficiency can be improved.
Abstract:
A method and a device for encoding/decoding an image are disclosed. The method for decoding an image comprises the steps of: decoding information on a quantization matrix; and restoring the quantization matrix on the basis of the information on the quantization matrix, wherein the information on the quantization matrix includes information indicating a DC value of the quantization matrix and/or information indicating differential values of quantization matrix coefficients.
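The abstract above signals a quantization matrix as a DC value plus coefficient differences. A minimal sketch of one way such a matrix could be restored, assuming a simple raster-order DPCM (the scan order, function name, and syntax are illustrative assumptions, not the patent's syntax):

```python
# Sketch: restore a quantization matrix from a signalled DC value and
# DPCM-coded coefficient differences.  Raster scan and flat layout are
# illustrative assumptions.

def restore_quant_matrix(dc_value, deltas, size=4):
    """The first coefficient is the DC value; each later coefficient is
    the previous one plus a signalled difference."""
    coeffs = [dc_value]
    for d in deltas:
        coeffs.append(coeffs[-1] + d)
    # reshape the flat coefficient list into a size x size matrix
    return [coeffs[r * size:(r + 1) * size] for r in range(size)]

m = restore_quant_matrix(16, [1, 1, 2] + [0] * 12, size=4)
# m[0] == [16, 17, 18, 20]
```

Coding differences rather than absolute coefficients keeps the signalled values small, which is why the abstract distinguishes the DC value from the differential values.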
Abstract:
Provided is a video encoding apparatus including: a signal separator to separate a differential image block into a first domain and a second domain based on a boundary line included in the differential image block, the differential image block indicating a difference between an original image and a prediction image of the original image; a transform encoder to perform transform encoding on the first domain using a discrete cosine transform (DCT); a quantization unit to quantize an output of the transform encoder in a frequency domain; a space-domain quantization unit to quantize the second domain in a space domain; and an entropy encoder to perform entropy encoding using outputs of the quantization unit and the space-domain quantization unit.
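The two-path structure above can be sketched as follows, under loud assumptions: residual pixels are split by a boundary mask, one domain is DCT-coded and quantized in the frequency domain, the other is quantized directly on its samples. The 1-D DCT and the flat quantizer steps are illustrative stand-ins, not the patent's design:

```python
# Sketch of the two-path encoder the abstract outlines.  Pixels of the
# residual (differential) block are split by a boundary mask; the first
# domain is DCT-coded and quantised in the frequency domain, the second
# is quantised directly in the sample (space) domain.
import math

def dct_1d(xs):
    """Unnormalised 1-D DCT-II, used as an illustrative transform."""
    n = len(xs)
    return [sum(x * math.cos(math.pi * (i + 0.5) * k / n)
                for i, x in enumerate(xs))
            for k in range(n)]

def encode_residual(residual, in_first_domain, qstep_freq=4, qstep_space=2):
    first  = [x for x, m in zip(residual, in_first_domain) if m]
    second = [x for x, m in zip(residual, in_first_domain) if not m]
    freq_q  = [round(c / qstep_freq) for c in dct_1d(first)]   # transform path
    space_q = [round(s / qstep_space) for s in second]         # space-domain path
    return freq_q, space_q  # both streams then go to entropy coding
```

Keeping the boundary-crossing pixels out of the transform avoids spreading a sharp edge across many DCT coefficients, which is the motivation for quantizing the second domain in the space domain.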
Abstract:
A method for decoding an image according to the present invention comprises the steps of: restoring a residual block by performing inverse quantization and inverse transformation on the entropy-decoded residual block; generating a prediction block by performing intra prediction for a current block; and restoring an image by adding the restored residual block to the prediction block, wherein the step of generating the prediction block further comprises a step of generating a final prediction value of a pixel to be predicted, which is included in the current block, on the basis of a first prediction value of the pixel and of a final correction value calculated by performing an arithmetic right shift by one binary digit on a two's complement integer representation of an initial correction value of the pixel. Thus, the computational complexity of image encoding/decoding can be reduced.
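The correction step above is concrete enough to sketch: an arithmetic right shift by one digit on a two's-complement integer halves the initial correction value (rounding toward negative infinity) without a division. The function names are illustrative, not the patent's:

```python
# Sketch of the correction step the abstract describes: the final
# correction value is the initial correction value arithmetically
# right-shifted by one binary digit in two's-complement form.

def final_correction(initial_correction):
    # Python's >> is an arithmetic (sign-preserving) shift on ints.
    return initial_correction >> 1

def predict_pixel(first_prediction, initial_correction):
    """Final prediction = first prediction + shifted correction."""
    return first_prediction + final_correction(initial_correction)

predict_pixel(100, 5)    # 100 + (5 >> 1)  == 102
predict_pixel(100, -5)   # 100 + (-5 >> 1) == 97
```

Replacing a division by a shift is the source of the reduced operational complexity the abstract claims.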
Abstract:
According to the present invention, an image encoding/decoding method comprises the steps of: performing intra prediction on a current block so as to generate a prediction block; performing filtering on a filtering target pixel in the prediction block on the basis of the intra prediction mode of the current block so as to generate a final prediction block; and generating a reconstructed block on the basis of the final prediction block and a reconstructed differential block corresponding to the current block. According to the present invention, image encoding/decoding efficiency can be improved.
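Mode-dependent filtering of the kind described above can be sketched as smoothing one edge of the prediction block with neighbouring reconstructed samples, the edge chosen by the intra mode. The mode names and the 3:1 averaging weights below are illustrative assumptions, not the patent's coefficients:

```python
# Sketch: after intra prediction, pixels along one boundary of the
# prediction block are filtered with reconstructed neighbour samples,
# with the boundary chosen by the intra prediction mode.

def filter_prediction_block(pred, left_ref, top_ref, mode):
    """pred: 2-D list of predicted samples; left_ref/top_ref:
    reconstructed neighbour samples to the left/above the block."""
    out = [row[:] for row in pred]
    if mode == "vertical":        # filter the leftmost column
        for y in range(len(out)):
            out[y][0] = (3 * out[y][0] + left_ref[y] + 2) >> 2
    elif mode == "horizontal":    # filter the topmost row
        for x in range(len(out[0])):
            out[0][x] = (3 * out[0][x] + top_ref[x] + 2) >> 2
    return out                    # the final prediction block

p = filter_prediction_block([[100, 100], [100, 100]], [80, 80], [90, 90], "vertical")
# p[0][0] == (300 + 80 + 2) >> 2 == 95
```

Filtering only the boundary that the chosen mode leaves least correlated with its neighbours is what lets the final prediction block track the reconstruction more closely.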
Abstract:
The present invention relates to a video encoding/decoding method and device, wherein the video encoding method according to the invention comprises the following steps: acquiring information of peripheral blocks; setting information about a current block based on the information of the peripheral blocks; and encoding the current block based on the set information, wherein the current block and the peripheral blocks may be CUs (coding units).
Abstract:
The present invention relates to an apparatus and/or a method for image encoding and/or decoding using inter-layer combined intra prediction. The apparatus comprises: a reference sample generation module generating a reference sample using at least one of a sample included in a reconstructed block neighboring the target block of the higher layer, a sample included in the co-located block of the lower layer corresponding to the target block of the higher layer, a sample included in the co-located block of the lower layer corresponding to the reconstructed block neighboring the target block of the higher layer, and a sample included in a specific block of the lower layer; a prediction performance module generating a prediction value for the target block using the reference sample; and a prediction value generation module generating a final prediction value for the prediction target block using the prediction value.