Abstract:
The present invention relates to a 3D video coding device and method, and provides a 3D video decoding method. The decoding method comprises the steps of: obtaining a disparity value on the basis of a reference view and a predetermined value; deriving motion information of a current block in a depth picture on the basis of the disparity value; and generating a prediction sample of the current block on the basis of the motion information, wherein the reference view is the view of a reference picture in a reference picture list. According to the present invention, even when a base view cannot be accessed, a disparity vector can be derived on the basis of an available reference view index in the decoded picture buffer (DPB), and coding efficiency can thereby be enhanced.
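The abstract does not fix the conversion formula, but in 3D video codecs a disparity is typically derived from a depth value through a linear scale and offset obtained from camera parameters. A minimal sketch under that assumption; the scale, offset, and shift constants and the mid-range "predetermined value" below are hypothetical, not taken from the invention:

```python
# Hedged sketch of depth-to-disparity conversion. The scale/offset/shift
# values are hypothetical camera-parameter constants for illustration only.

def depth_to_disparity(depth_sample, scale, offset, shift=8):
    """Linear conversion commonly used in 3D video coding:
    disparity = (depth * scale + offset) >> shift."""
    return (depth_sample * scale + offset) >> shift

# A "predetermined value" could be a mid-range depth for an 8-bit depth map.
PREDETERMINED_DEPTH = 128

disparity = depth_to_disparity(PREDETERMINED_DEPTH, scale=64, offset=128)
print(disparity)  # (128 * 64 + 128) >> 8 = 32
```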
Abstract:
The present invention relates to a method for constructing a merge candidate list by using view synthesis prediction (VSP) and the like in multi-view video coding. The method for constructing the merge candidate list according to the present invention comprises the steps of: determining a prediction mode for a current block; deriving, as a merge candidate, motion information from neighboring blocks of the current block when the prediction mode for the current block is a merge mode or a skip mode; and constructing the merge candidate list by using the motion information of the neighboring blocks and the disparity information derived from the neighboring blocks of the current block.
Abstract:
The present invention relates to a video signal processing method and device capable of: obtaining a reference view block by using a predetermined motion vector; obtaining the depth value of a reference depth block which corresponds to the reference view block; obtaining an inter-view motion vector for a current block by using at least one depth value of the reference depth block; and decoding the current block by using the inter-view motion vector.
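As a sketch of how "at least one depth value" of a reference depth block can yield an inter-view motion vector: a common practice in multi-view coding is to take the maximum of the block's corner depth samples and convert it to a horizontal disparity. This is an illustrative assumption, not necessarily the claimed derivation, and the conversion constants are hypothetical:

```python
def interview_motion_vector(depth_block, scale=64, offset=128, shift=8):
    """Derive an inter-view motion vector (horizontal disparity, 0) from the
    maximum of the four corner samples of a reference depth block.
    scale/offset/shift are hypothetical camera-parameter constants."""
    corners = (depth_block[0][0], depth_block[0][-1],
               depth_block[-1][0], depth_block[-1][-1])
    max_depth = max(corners)
    return ((max_depth * scale + offset) >> shift, 0)

mv = interview_motion_vector([[10, 200], [30, 40]])
print(mv)  # (50, 0): (200 * 64 + 128) >> 8 = 50
```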
Abstract:
The present invention relates to a method and an apparatus for coding a video signal in which, more specifically, an inter-view motion vector is obtained by using a depth value of the depth block corresponding to a current texture block, and an illumination difference is compensated. By obtaining the inter-view motion vector from the depth value of the depth block corresponding to the current texture block and compensating the illumination difference, the present invention can obtain an accurate prediction value of the current texture block and thus increase the accuracy of inter-view prediction.
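The abstract does not detail the compensation model. A common simplification is offset-only illumination compensation, which shifts the inter-view prediction by the mean difference of the reconstructed neighboring samples; this sketch is a stand-in for the fuller linear a·x + b model and is not claimed to be the patented method:

```python
def compensate_illumination(pred_samples, cur_neighbors, ref_neighbors):
    """Offset-only illumination compensation: shift each prediction sample by
    the mean brightness difference between the reconstructed neighbors of the
    current block and those of its inter-view reference block."""
    offset = (sum(cur_neighbors) / len(cur_neighbors)
              - sum(ref_neighbors) / len(ref_neighbors))
    return [p + offset for p in pred_samples]

print(compensate_illumination([10, 20], [5, 5], [3, 3]))  # [12.0, 22.0]
```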
Abstract:
The present invention provides a method for performing a transform, the method comprising the steps of: deriving a row transform set, a column transform set, and a permutation matrix on the basis of a given transform matrix (H) and an error tolerance parameter; obtaining row-column transform (RCT) coefficients on the basis of the row transform set, the column transform set, and the permutation matrix; and performing quantization and entropy encoding on the RCT coefficients, wherein the permutation matrix represents a matrix obtained by permuting the rows of an identity matrix.
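The permutation matrix in the final clause can be built exactly as described, by permuting the rows of an identity matrix. A minimal sketch in plain Python (the `perm` indexing convention is an assumption for illustration):

```python
def permutation_matrix(perm):
    """Identity matrix with permuted rows: row i of P is row perm[i] of I,
    so applying P to a vector x gives (P @ x)[i] == x[perm[i]]."""
    n = len(perm)
    return [[1 if col == perm[row] else 0 for col in range(n)]
            for row in range(n)]

def apply(mat, vec):
    """Plain matrix-vector product."""
    return [sum(m * v for m, v in zip(row, vec)) for row in mat]

P = permutation_matrix([2, 0, 1])
print(P)                          # [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
print(apply(P, [10, 20, 30]))     # [30, 10, 20]
```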
Abstract:
The present invention provides a method for encoding a video signal on the basis of a graph-based lifting transform (GBLT), comprising the steps of: detecting an edge from an intra residual signal; generating a graph on the basis of the detected edge, wherein the graph includes nodes and weighted links; acquiring GBLT coefficients by performing the GBLT on the graph; quantizing the GBLT coefficients; and entropy-encoding the quantized GBLT coefficients, wherein the GBLT includes a partitioning step, a prediction step, and an update step.
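The three lifting steps can be sketched with a plain Haar-style lifting on a 1-D signal; this ignores the graph structure and edge weights of the actual GBLT and is only meant to show the partition/predict/update pattern the abstract names:

```python
def lifting_forward(signal):
    """One level of Haar-style lifting; assumes an even-length signal."""
    # Partitioning step: split the nodes into an even set and an odd set.
    even, odd = signal[0::2], signal[1::2]
    # Prediction step: predict each odd sample from its even neighbor and
    # keep only the prediction residual (detail coefficients).
    detail = [o - e for o, e in zip(odd, even)]
    # Update step: update the even samples with the details so the
    # approximation preserves the local average.
    approx = [e + d / 2 for e, d in zip(even, detail)]
    return approx, detail

print(lifting_forward([2, 4, 6, 8]))  # ([3.0, 7.0], [2, 2])
```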
Abstract:
The present invention provides a method for encoding a video signal by using a single optimized graph, comprising the steps of: obtaining a residual block; generating graphs from the residual block; generating an optimized graph and an optimized transform by combining the graphs, wherein the graphs are combined on the basis of an optimization step; and performing a transform on the residual block on the basis of the optimized graph and the optimized transform.
Abstract:
The present invention provides a method for decoding a video signal by using a graph-based transform, comprising the steps of: parsing a transform index from the video signal; generating a line graph on the basis of edge information of a target unit; aligning transform vectors for each segment of the line graph on the basis of a transform type corresponding to the transform index; acquiring a transform kernel by realigning the transform vectors of each segment of the line graph according to a predetermined condition; and performing an inverse transform on the target unit on the basis of the transform kernel.
Abstract:
The present invention provides a method for decoding a video signal using a graph-based transform, comprising the steps of: receiving, from the video signal, a transform index for a target block; deriving a graph-based transform kernel corresponding to the transform index, wherein the graph-based transform kernel is determined based on boundary information, which represents a property of the signal at a block boundary; and decoding the target block based on the graph-based transform kernel.
Abstract:
The present invention provides a method for decoding a video signal. The method includes: obtaining filtering flag information indicating whether to perform filtering for a target unit; obtaining a filter parameter based on the filtering flag information, the filter parameter including at least one of a base filter kernel and a modulation weight; and performing filtering for the target unit using the filter parameter, wherein the filter parameter corresponds to a temporal filter parameter or a spatial filter parameter, and the temporal filter parameter is used to minimize the difference between an original image and a reference image, and the spatial filter parameter is used to minimize the difference between the original image and a reconstructed image.
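The base-kernel/modulation-weight structure is not spelled out in the abstract, but the "minimize the difference" clauses point to a least-squares fit. A one-weight sketch of deriving such a filter parameter; the single scalar weight is a deliberate simplification of whatever multi-tap kernel the invention actually uses:

```python
def derive_filter_weight(reference, original):
    """Least-squares scalar weight w minimizing sum((original - w * reference)**2),
    i.e. w = <reference, original> / <reference, reference>."""
    num = sum(r * o for r, o in zip(reference, original))
    den = sum(r * r for r in reference)
    return num / den

w = derive_filter_weight([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(w)  # 2.0
```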