Abstract:
A method for decoding an image by a decoding device according to the present disclosure comprises the steps of: receiving a bitstream including residual information; deriving a quantized transform coefficient of a current block on the basis of the residual information included in the bitstream; deriving a residual sample of the current block on the basis of the quantized transform coefficient; and generating a reconstructed picture on the basis of the residual sample of the current block.
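As an illustration of this residual decoding flow, the following minimal sketch dequantizes the coefficients with a single scalar quantization step and applies an inverse 2-D DCT-II (scipy.fft.idctn) before adding a prediction block; the quantization step, transform type, and 8-bit clipping are assumptions made for illustration, not the procedure defined in the disclosure.

```python
import numpy as np
from scipy.fft import idctn

def reconstruct_block(quant_coeffs, pred_block, qstep=8):
    """Sketch: dequantize, inverse-transform, and add the prediction.

    quant_coeffs : 2-D array of quantized transform coefficients
    pred_block   : 2-D array of prediction samples (same shape)
    qstep        : illustrative scalar quantization step (hypothetical)
    """
    # Dequantization: scale the coefficients back to the transform domain.
    coeffs = quant_coeffs.astype(np.float64) * qstep
    # Inverse transform yields the residual samples of the current block.
    residual = idctn(coeffs, norm="ortho")
    # Reconstruction: prediction + residual, clipped to the 8-bit sample range.
    return np.clip(pred_block + np.rint(residual), 0, 255).astype(np.uint8)
```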
Abstract:
A video decoding method performed by a decoding apparatus according to the present disclosure includes: deriving one of a plurality of cross-component linear model (CCLM) prediction modes as a CCLM prediction mode of a current chroma block; deriving a sample number of neighboring chroma samples of the current chroma block based on the CCLM prediction mode of the current chroma block, a size of the current chroma block, and a specific value; deriving the neighboring chroma samples according to the sample number; calculating CCLM parameters based on the neighboring chroma samples and the down-sampled neighboring luma samples; deriving prediction samples for the current chroma block based on the CCLM parameters and the down-sampled luma samples; and generating reconstructed samples for the current chroma block based on the prediction samples, wherein the specific value is derived as 2.
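A minimal sketch of the CCLM prediction step, assuming the common min/max parameter derivation (the two neighboring positions with the smallest and largest down-sampled luma values define the linear model); the exact sample-number derivation and parameter computation claimed in the disclosure may differ, and the 8-bit range is an assumption.

```python
import numpy as np

def cclm_predict(neigh_luma_ds, neigh_chroma, luma_ds):
    """Sketch: CCLM prediction of a chroma block from down-sampled luma.

    neigh_luma_ds : down-sampled neighboring luma samples (1-D array)
    neigh_chroma  : neighboring chroma samples at the same positions (1-D array)
    luma_ds       : down-sampled luma samples of the current block (2-D array)
    """
    i_min, i_max = np.argmin(neigh_luma_ds), np.argmax(neigh_luma_ds)
    l_min, l_max = neigh_luma_ds[i_min], neigh_luma_ds[i_max]
    c_min, c_max = neigh_chroma[i_min], neigh_chroma[i_max]
    # Linear model: pred_chroma = alpha * luma + beta
    alpha = 0.0 if l_max == l_min else (c_max - c_min) / (l_max - l_min)
    beta = c_min - alpha * l_min
    return np.clip(np.rint(alpha * luma_ds + beta), 0, 255).astype(np.uint8)
```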
Abstract:
A video decoding method according to this document includes constructing a most probable mode (MPM) list by deriving MPM candidates for a current block based on a neighboring block adjacent to the current block, deriving an intra prediction mode for the current block based on the MPM list, generating predicted samples by performing prediction for the current block based on the intra prediction mode, and generating a reconstructed picture for the current block based on the predicted samples.
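The sketch below illustrates one way an MPM list can be built from the intra modes of neighboring blocks, in the spirit of HEVC/VVC-style derivation; the candidate ordering, list size, and default modes are illustrative assumptions, not the construction rule defined in this document.

```python
PLANAR, DC = 0, 1
NUM_ANGULAR = 65  # angular modes 2..66 in a VVC-like numbering (assumption)

def _angular_neighbor(mode, offset):
    # Wrap adjacent angular modes so they stay within the angular range.
    return 2 + (mode - 2 + offset) % NUM_ANGULAR

def build_mpm_list(left_mode=None, above_mode=None, size=6):
    """Sketch: construct an MPM list from the left/above neighbor modes.
    Unavailable neighbors default to planar; the ordering is hypothetical."""
    left = PLANAR if left_mode is None else left_mode
    above = PLANAR if above_mode is None else above_mode
    candidates = [PLANAR, DC, left, above]
    # Add modes adjacent to angular neighbor modes as further candidates.
    for m in (left, above):
        if m > DC:
            candidates += [_angular_neighbor(m, -1), _angular_neighbor(m, +1)]
    # Deduplicate while preserving order, then fill with default modes.
    mpm = list(dict.fromkeys(candidates))
    for default in (50, 18, 46, 54, 2, 34):  # vertical, horizontal, ...
        if len(mpm) >= size:
            break
        if default not in mpm:
            mpm.append(default)
    return mpm[:size]
```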
Abstract:
An image decoding method according to the present invention comprises: obtaining, from a bitstream, information relating to the intra prediction type of a current block, information relating to the intra prediction mode of the current block, and residual information of the current block; performing intra prediction based on the information relating to the intra prediction type and the information relating to the intra prediction mode; performing residual processing based on the residual information; and reconstructing the current block based on a result of the intra prediction and a result of the residual processing, wherein when the intra prediction type indicates a particular intra prediction type, the information relating to the intra prediction mode includes an MPM index, and when the intra prediction type indicates the particular intra prediction type, the MPM index is parsed from the bitstream without parsing of an MPM flag.
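The parsing logic described here can be sketched as follows; the bitstream reader and its read_flag/read_mpm_index/read_remaining_mode methods are hypothetical names used only to show the control flow.

```python
def parse_intra_mode_info(reader, intra_type, particular_type):
    """Sketch: for the particular intra prediction type the MPM index is
    parsed directly, with no MPM flag; otherwise the MPM flag decides
    whether an MPM index or remaining-mode information follows."""
    if intra_type == particular_type:
        # MPM flag is not signaled; an MPM candidate is always used.
        return {"mpm": True, "mpm_idx": reader.read_mpm_index()}
    if reader.read_flag():                       # MPM flag
        return {"mpm": True, "mpm_idx": reader.read_mpm_index()}
    return {"mpm": False, "rem_mode": reader.read_remaining_mode()}
```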
Abstract:
A prediction method according to the present invention comprises the steps of: deriving a prediction block on the basis of an intra prediction mode; deriving transform coefficients of the prediction block by applying a transformation to the prediction block; applying frequency-domain filtering to the transform coefficients of the prediction block; and generating a modified prediction block by applying an inverse transformation to modified transform coefficients derived through the frequency-domain filtering, whereby prediction performance can be improved and the amount of data required for residual coding can be reduced.
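A minimal sketch of this forward-transform, filter, inverse-transform pipeline, assuming a 2-D DCT-II (scipy.fft) and a simple low-pass mask as the frequency-domain filter; the actual filter shape defined in the invention is not specified here, so the mask is purely illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def filter_prediction_block(pred_block, keep=4):
    """Sketch: frequency-domain filtering of an intra prediction block.

    The prediction block is transformed, the transform coefficients are
    filtered (here: keep only the lowest `keep` x `keep` frequencies --
    an illustrative choice), and the modified coefficients are
    inverse-transformed to obtain the modified prediction block.
    """
    coeffs = dctn(pred_block.astype(np.float64), norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0          # low-pass mask (assumption)
    modified = idctn(coeffs * mask, norm="ortho")
    return np.clip(np.rint(modified), 0, 255).astype(np.uint8)
```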
Abstract:
According to the present disclosure, a video decoding method performed by a video decoding device includes parsing remaining intra prediction mode information for a current block, deriving neighboring samples of the current block, deriving an MPM list including MPM candidates of the current block, deriving an intra prediction mode of the current block based on the remaining intra prediction mode information, wherein the intra prediction mode is one of the remaining intra prediction modes excluding the MPM candidates, deriving a prediction sample of the current block based on the intra prediction mode and the neighboring samples, and deriving a reconstructed picture based on the prediction sample, wherein the remaining intra prediction mode information is coded through a truncated binary (TB) binarization process, and wherein a binarization parameter for the TB binarization process is 60.
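The truncated binary binarization with parameter cMax = 60 (61 remaining modes) follows the standard TB construction: with n = cMax + 1, k = floor(log2 n) and u = 2^(k+1) - n, the first u symbols use k bins and the rest use k + 1 bins. The sketch below returns the bin string for a remaining-mode symbol.

```python
def truncated_binary(symbol, c_max=60):
    """Sketch: truncated binary (TB) binarization with parameter cMax.

    For cMax = 60 there are n = 61 remaining intra modes:
      k = floor(log2(n)) = 5,  u = 2**(k + 1) - n = 3.
    Symbols 0..u-1 use k bins; symbols u..cMax use k+1 bins of (symbol + u).
    """
    n = c_max + 1
    k = n.bit_length() - 1            # floor(log2(n))
    u = (1 << (k + 1)) - n
    if symbol < u:
        return format(symbol, f"0{k}b")
    return format(symbol + u, f"0{k + 1}b")

# Example: symbol 2 -> '00010' (5 bins), symbol 3 -> '000110' (6 bins).
```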
Abstract:
Disclosed is a method for processing an image based on an intra prediction mode and an apparatus for the same. Particularly, the method may include: generating a prediction sample of a sub-sampled block in a current block based on an intra prediction mode of the current block; deriving a residual sample of the sub-sampled block; reconstructing the sub-sampled block by adding the prediction sample to the residual sample; and reconstructing the current block by merging the reconstructed sub-sampled blocks.
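One possible reading of the merge step is sketched below, assuming the current block is partitioned into phase-shifted sub-sampled blocks (every second sample in each direction) that are reconstructed independently and interleaved back; the sub-sampling pattern and the pred_fn/residuals interfaces are hypothetical.

```python
import numpy as np

def reconstruct_by_subsampling(pred_fn, residuals, height, width, step=2):
    """Sketch: reconstruct a current block from its sub-sampled blocks.

    The block is split into step*step sub-sampled blocks by taking every
    `step`-th sample in each direction (illustrative pattern; height and
    width are assumed to be multiples of `step`). Each sub-sampled block is
    reconstructed as prediction + residual and merged back by interleaving.

    pred_fn(dy, dx)      : prediction samples of the sub-sampled block with
                           phase (dy, dx)
    residuals[(dy, dx)]  : residual samples of that sub-sampled block
    """
    recon = np.zeros((height, width), dtype=np.int32)
    for dy in range(step):
        for dx in range(step):
            sub = pred_fn(dy, dx) + residuals[(dy, dx)]
            recon[dy::step, dx::step] = sub
    return np.clip(recon, 0, 255).astype(np.uint8)
```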
Abstract:
The present invention provides a method for processing a video signal. The method includes: determining an optimal collocated picture based on the reference index of at least one of the candidate blocks for predicting motion information of a current block; predicting motion information of the current block based on information of a collocated block within the optimal collocated picture; and generating a motion prediction signal based on the predicted motion information.
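A sketch of one plausible selection rule for the "optimal" collocated picture: choose the reference index most frequently used by the spatial candidate blocks. This counting rule and the candidate-block representation are assumptions for illustration, not necessarily the rule claimed by the invention.

```python
from collections import Counter

def select_collocated_ref_idx(candidate_blocks):
    """Sketch: derive a collocated-picture reference index from the
    reference indices of the candidate blocks of the current block.

    candidate_blocks : list of dicts with a 'ref_idx' entry
                       (None if the candidate is unavailable or intra-coded)
    """
    ref_indices = [b["ref_idx"] for b in candidate_blocks
                   if b.get("ref_idx") is not None]
    if not ref_indices:
        return 0                        # fall back to reference index 0
    return Counter(ref_indices).most_common(1)[0][0]
```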
Abstract:
An intra prediction mode-based image processing method includes: obtaining, on the basis of the intra prediction mode of a current block, a first prediction sample value and a second prediction sample value by using a reference sample neighboring the current block; and generating a prediction sample for the current block by linear interpolation of the first prediction sample value and the second prediction sample value.
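The interpolation step can be sketched as a distance-weighted average of the two prediction sample values, where the value obtained from the nearer reference sample receives the larger weight; the rounding convention below is the usual (sum + half) integer form and is an assumption.

```python
def interpolate_prediction(p_first, p_second, dist_first, dist_second):
    """Sketch: linear interpolation of two prediction sample values.

    p_first / p_second       : first and second prediction sample values
                               obtained from neighboring reference samples
    dist_first / dist_second : distances from the current sample position
                               to the reference samples used for each value
    """
    total = dist_first + dist_second
    # Closer reference -> larger weight (weights are the opposite distances).
    return (p_first * dist_second + p_second * dist_first + total // 2) // total
```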
Abstract:
The present invention relates to a device and a method for coding a 3D video. A decoding method according to the present invention comprises the steps of: receiving, through a first syntax, a camera parameter for converting a depth value into a disparity value; determining whether the camera parameter applied to a previous slice or picture also applies to a current slice or picture; and, if the camera parameter applies to the current slice or picture, deriving a disparity value of a current block on the basis of the camera parameter. According to the present invention, slices or pictures within a certain interval may share the same camera parameter, the transmission of redundant information may be avoided, and coding efficiency may be improved.
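A minimal sketch of reusing a previously signaled camera parameter for depth-to-disparity conversion; the fixed-point scale/offset/shift form is the typical conversion used in 3D video coding, but the field names, rounding, and reuse flag below are illustrative assumptions.

```python
class CameraParameter:
    """Sketch: camera parameter used to convert a depth value into a
    disparity value, shared across consecutive slices/pictures."""
    def __init__(self, scale, offset, shift):
        # shift is assumed to be >= 1 in this sketch.
        self.scale, self.offset, self.shift = scale, offset, shift

    def depth_to_disparity(self, depth):
        # Typical fixed-point conversion with rounding (illustrative).
        return (self.scale * depth + self.offset
                + (1 << (self.shift - 1))) >> self.shift

def derive_disparity(depth, current_param, previous_param, reuse_previous):
    """If the camera parameter of a previous slice/picture also applies to
    the current slice/picture, reuse it instead of re-transmitting it."""
    param = previous_param if reuse_previous else current_param
    return param.depth_to_disparity(depth)
```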