Abstract:
Disclosed are a video encoding/decoding method and apparatus for video including a plurality of views. The video decoding method for the plurality of views comprises the steps of: deriving base merge motion candidates for a current Prediction Unit (PU) to construct a merge motion candidate list; deriving extended merge motion candidates for the current PU when the current PU corresponds to a depth information map or a dependent view; and adding the extended merge motion candidates to the merge motion candidate list.
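A minimal sketch of the list construction described above, in Python. The candidate inputs, the list-size limit, and the de-duplication step are illustrative assumptions; only the branching on depth-information-map / dependent-view PUs comes from the abstract.

```python
def build_merge_candidate_list(base_candidates, extended_candidates,
                               is_depth_map, is_dependent_view,
                               max_candidates=6):
    """Construct a merge motion candidate list for the current PU.

    base_candidates: candidates derived for every PU (e.g., spatial/temporal).
    extended_candidates: candidates derived only for depth-map or
        dependent-view PUs (e.g., inter-view candidates).
    max_candidates: illustrative list-size limit, not taken from the abstract.
    """
    merge_list = list(base_candidates)

    # Extended candidates are added only when the current PU belongs to a
    # depth information map or a dependent view.
    if is_depth_map or is_dependent_view:
        for cand in extended_candidates:
            if len(merge_list) >= max_candidates:
                break
            if cand not in merge_list:
                merge_list.append(cand)

    return merge_list[:max_candidates]

# Usage: a dependent-view PU picks up the extended candidates as well.
base = [("spatial", 0), ("spatial", 1), ("temporal", 0)]
extended = [("inter_view", 0), ("depth", 0)]
merge_list = build_merge_candidate_list(base, extended,
                                        is_depth_map=False,
                                        is_dependent_view=True)
```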
Abstract:
The present invention relates to a method and apparatus for sharing a candidate list. A method of generating a merging candidate list for a predictive block may include: deriving at least one of a spatial merging candidate and a temporal merging candidate of the predictive block on the basis of the coding block that includes the predictive block on which a parallel merging process is performed; and generating a single merging candidate list for the coding block on the basis of the derived merging candidates. It is thus possible to increase encoding and decoding speed by performing inter-picture prediction on a plurality of predictive blocks in parallel.
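The shared-list idea can be sketched as follows: candidates are derived once from the coding block that contains the predictive blocks, and the single resulting list is reused by every predictive block, so their predictions can run concurrently. The callables and the CU/PU fields below are stand-ins, not the disclosed derivation rules.

```python
from concurrent.futures import ThreadPoolExecutor
from types import SimpleNamespace

def shared_merge_list_for_cu(cu, derive_candidates):
    """Derive one merging candidate list from the coding block (CU) geometry
    and share it across all predictive blocks (PUs) inside the CU."""
    # Candidates depend only on the CU position/size, not on any PU's motion
    # data, so no PU inside the CU has to wait for another PU.
    return derive_candidates(cu.x, cu.y, cu.width, cu.height)

def predict_pus_in_parallel(cu, pus, derive_candidates, predict_pu):
    """Inter-picture prediction of all PUs in the CU using the shared list.

    derive_candidates and predict_pu are hypothetical callables standing in
    for the spatial/temporal derivation and motion-compensation steps."""
    shared_list = shared_merge_list_for_cu(cu, derive_candidates)
    # Every PU uses the same candidate list, so the PUs can be predicted
    # in parallel.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda pu: predict_pu(pu, shared_list), pus))

# Usage with stand-in callables: the CU geometry yields one shared list.
cu = SimpleNamespace(x=0, y=0, width=32, height=32)
pus = [SimpleNamespace(idx=i) for i in range(4)]
results = predict_pus_in_parallel(
    cu, pus,
    derive_candidates=lambda x, y, w, h: [("spatial", x, y), ("temporal", w, h)],
    predict_pu=lambda pu, cands: (pu.idx, len(cands)))
```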
Abstract:
An inter-prediction method according to the present invention comprises the steps of: deriving motion information of a current block; and generating a prediction block for the current block on the basis of the derived motion information. According to the present invention, computational complexity can be reduced and encoding efficiency can be improved.
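For the two steps named above, a rough sketch of the second one: given motion information already derived for the current block, the prediction block is formed by motion compensation from a reference picture. Integer-pel copying and edge clipping are simplifying assumptions (real codecs also interpolate fractional positions).

```python
import numpy as np

def motion_compensate(ref_picture, x, y, width, height, mv_x, mv_y):
    """Generate a prediction block for the current block at (x, y) using the
    derived motion vector (mv_x, mv_y); integer-pel only, for simplicity."""
    h, w = ref_picture.shape
    # Clip the displaced block position to the reference picture bounds.
    rx = min(max(x + mv_x, 0), w - width)
    ry = min(max(y + mv_y, 0), h - height)
    return ref_picture[ry:ry + height, rx:rx + width].copy()

# Usage: a 16x16 block at (32, 48) predicted with motion vector (-3, 2).
ref = np.random.randint(0, 256, size=(288, 352), dtype=np.uint8)
pred_block = motion_compensate(ref, x=32, y=48, width=16, height=16,
                               mv_x=-3, mv_y=2)
```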
Abstract:
The present invention relates to a method and apparatus for setting a reference picture index of a temporal merging candidate. An inter-picture prediction method using a temporal merging candidate can include the steps of: determining a reference picture index for a current block; and deriving a temporal merging candidate block of the current block and calculating a temporal merging candidate from that block, wherein the reference picture index of the temporal merging candidate can be calculated regardless of whether any block other than the current block has been decoded. Accordingly, the video processing speed can be increased and the video processing complexity can be reduced.
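One hedged reading of the independence property: the temporal merging candidate's reference picture index is set by a fixed rule (index 0 is assumed here) rather than copied from a neighbouring block, so deriving the candidate never waits on another block's decoding. The motion-vector scaling step below is a plausible surrounding detail, not something stated in the abstract.

```python
def temporal_merging_candidate(col_mv, col_poc_diff, cur_poc, ref_poc_list,
                               fixed_ref_idx=0):
    """Derive a temporal merging candidate for the current block.

    The candidate's reference picture index is set by a fixed rule
    (fixed_ref_idx, assumed to be 0 here) instead of being copied from a
    neighbouring block, so no other block needs to be decoded first.

    col_mv: motion vector of the co-located (temporal candidate) block.
    col_poc_diff: POC distance between the co-located picture and its reference.
    cur_poc / ref_poc_list: POC of the current picture and of its reference list.
    """
    ref_idx = fixed_ref_idx  # independent of other blocks' decoding state
    # Scale the co-located motion vector to the chosen reference picture,
    # proportionally to the picture-order-count (POC) distances.
    cur_poc_diff = cur_poc - ref_poc_list[ref_idx]
    if col_poc_diff == 0:
        scaled_mv = col_mv
    else:
        scale = cur_poc_diff / col_poc_diff
        scaled_mv = (col_mv[0] * scale, col_mv[1] * scale)
    return {"mv": scaled_mv, "ref_idx": ref_idx}

# Usage: co-located MV (4, -2) over a POC distance of 2, scaled to distance 1.
cand = temporal_merging_candidate(col_mv=(4, -2), col_poc_diff=2,
                                  cur_poc=8, ref_poc_list=[7, 4])
```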
Abstract:
Provided is a video encoding apparatus including: a signal separator to separate a differential image block into a first domain and a second domain based on a boundary line included in the differential image block, the differential image block indicating a difference between an original image and a prediction image of the original image; a transform encoder to perform transform encoding on the first domain using a discrete cosine transform (DCT); a quantization unit to quantize an output of the transform encoder in a frequency domain; a space domain quantization unit to quantize the second domain in a space domain; and an entropy encoder to perform entropy encoding using the outputs of the quantization unit and the space domain quantization unit.
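A hedged sketch of the two-branch residual pipeline, using a binary mask as a stand-in for the boundary-line description and uniform quantizers in both branches; the actual separator, transform size, and entropy coder are not specified by the abstract.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2-D type-II DCT with orthonormal scaling."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def encode_residual_block(residual, boundary_mask, q_freq=16, q_space=8):
    """Split a residual (differential image) block along a boundary mask,
    DCT-code and frequency-quantize the first domain, and quantize the
    second domain directly in the spatial domain.

    boundary_mask: True where a sample belongs to the first domain.
    q_freq, q_space: illustrative uniform quantization step sizes.
    """
    # First domain: samples inside the mask, zero elsewhere, then DCT + quantize.
    first_domain = np.where(boundary_mask, residual, 0.0)
    freq_coeffs = np.round(dct2(first_domain) / q_freq).astype(np.int32)

    # Second domain: quantized sample-by-sample in the spatial domain.
    second_domain = np.where(boundary_mask, 0.0, residual)
    space_coeffs = np.round(second_domain / q_space).astype(np.int32)

    # Both coefficient sets would then be passed to the entropy encoder.
    return freq_coeffs, space_coeffs

# Usage: an 8x8 residual block split by a diagonal boundary.
res = np.random.randn(8, 8) * 10
mask = np.tril(np.ones((8, 8), dtype=bool))
freq_q, space_q = encode_residual_block(res, mask)
```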