Abstract:
According to the present invention, an image decoding method performed by a decoding device comprises the steps of: generating a first prediction block and a second prediction block of a current block; selecting, from among the first prediction block and the second prediction block, a prediction block to which a Wiener filter is to be applied; deriving Wiener filter coefficients for the selected prediction block based on the first prediction block and the second prediction block; filtering the selected prediction block based on the derived Wiener filter coefficients; and generating a reconstructed block of the current block based on the filtered prediction block. According to the present invention, overall coding efficiency can be improved by minimizing the difference between the prediction blocks.
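As an illustration of the coefficient-derivation and filtering steps only, the sketch below is not the claimed syntax or filter shape: the 1-D 3-tap window, the function names, and the least-squares formulation over the whole block are assumptions. It derives coefficients that map the selected prediction block toward the other prediction block and then applies them.

    import numpy as np

    def derive_wiener_coeffs(selected, other, taps=3):
        """Least-squares (Wiener) coefficients for a 1-D horizontal filter that maps
        the selected prediction block toward the other prediction block (illustrative)."""
        pad = taps // 2
        padded = np.pad(selected.astype(np.float64), ((0, 0), (pad, pad)), mode='edge')
        # Design matrix: one row per pixel, one column per filter tap.
        rows = [padded[r, c:c + taps]
                for r in range(selected.shape[0])
                for c in range(selected.shape[1])]
        A = np.asarray(rows)
        b = other.astype(np.float64).ravel()
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimizes ||A @ coeffs - b||^2
        return coeffs

    def filter_block(selected, coeffs):
        """Apply the derived horizontal filter to the selected prediction block."""
        taps = len(coeffs)
        pad = taps // 2
        padded = np.pad(selected.astype(np.float64), ((0, 0), (pad, pad)), mode='edge')
        out = np.zeros(selected.shape, dtype=np.float64)
        for k in range(taps):
            out += coeffs[k] * padded[:, k:k + selected.shape[1]]
        return out

    # Example: the filtered block is closer to the other prediction than the unfiltered one.
    pred0 = np.random.randint(0, 256, (8, 8)).astype(np.float64)
    pred1 = 0.5 * pred0 + 0.5 * np.roll(pred0, 1, axis=1)
    coeffs = derive_wiener_coeffs(pred0, pred1)
    print(np.abs(filter_block(pred0, coeffs) - pred1).mean(), "<", np.abs(pred0 - pred1).mean())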
Abstract:
An inter prediction method performed by a decoding apparatus according to the present invention comprises the steps of: receiving information associated with an MVD through a bitstream; deriving a candidate motion information list on the basis of a neighboring block of a current block; deriving an MVP of the current block on the basis of the candidate motion information list; deriving a motion vector of the current block on the basis of the MVP and the MVD; and generating a prediction sample for the current block on the basis of the motion vector. According to the present invention, a motion vector may be derived on the basis of a candidate motion information list derived from a neighboring block. Therefore, the amount of data required for prediction mode information can be reduced, and inter prediction accuracy and overall coding efficiency can be improved.
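A minimal sketch of the motion-vector reconstruction described above: the candidate list is built from the motion vectors of available neighboring blocks, and MV = MVP + MVD. The function names, the list size, and the zero-vector padding are assumptions for illustration, not the claimed derivation.

    def build_candidate_list(neighbor_mvs, max_candidates=2):
        """Collect MVP candidates from available neighboring blocks (None = unavailable),
        dropping duplicates and keeping at most 'max_candidates' entries."""
        candidates = []
        for mv in neighbor_mvs:
            if mv is not None and mv not in candidates:
                candidates.append(mv)
            if len(candidates) == max_candidates:
                break
        while len(candidates) < max_candidates:  # pad with a zero vector if needed
            candidates.append((0, 0))
        return candidates

    def derive_motion_vector(neighbor_mvs, mvp_index, mvd):
        """MV = MVP (selected from the candidate list by the signalled index) + MVD."""
        mvp = build_candidate_list(neighbor_mvs)[mvp_index]
        return (mvp[0] + mvd[0], mvp[1] + mvd[1])

    # Example: left and above neighbors available, index 1 selected, MVD parsed from the bitstream.
    print(derive_motion_vector([(-4, 2), (8, 0), None], mvp_index=1, mvd=(3, -1)))  # (11, -1)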
Abstract:
Disclosed are a method for encoding/decoding an image and a device therefor. Specifically, a method by which a decoding device decodes an image comprises: a step of parsing decoding order information indicating the location of a next block to be decoded after a current block; and a step of determining the next block to be decoded after the current block on the basis of the decoding order information, wherein the decoding order information indicates the location of the next block relative to the current block, and the next block may be selected from among predefined candidate blocks which can be decoded after the current block.
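The following sketch only illustrates the idea of selecting the next block from predefined relative candidates; the particular candidate offsets and names are assumptions, not the candidates defined by the method.

    # Predefined candidate positions of the next block, relative to the current block
    # (these particular offsets are assumptions for illustration).
    CANDIDATE_OFFSETS = {
        0: (1, 0),    # block to the right
        1: (0, 1),    # block below
        2: (-1, 1),   # block below-left
    }

    def next_block_position(current_pos, decoding_order_info):
        """Select the next block to decode from the predefined candidates,
        using the parsed decoding order information as an index."""
        dx, dy = CANDIDATE_OFFSETS[decoding_order_info]
        return (current_pos[0] + dx, current_pos[1] + dy)

    print(next_block_position((3, 2), decoding_order_info=1))  # (3, 3): the block below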
Abstract:
The present invention provides a method for encoding a video signal, comprising the steps of: obtaining activity information from the video signal, wherein the activity information indicates edge characteristics of an image and includes edge orientation information and/or edge level information; determining conditionally non-linear transform (CNT) configuration information on the basis of the activity information; and performing CNT prediction coding on the basis of the CNT configuration information, wherein the CNT prediction coding involves performing prediction using all previously decoded pixel values.
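One plausible way to obtain such activity information is from simple gradients; the sketch below computes an edge level and a coarse orientation and maps them to a configuration label. The gradient measure, the threshold, and the configuration names are assumptions and do not represent the actual CNT configurations.

    import numpy as np

    def edge_activity(block):
        """Edge level (strength) and coarse orientation from simple gradients;
        one possible realisation of the activity information, not the claimed one."""
        gx = np.abs(np.diff(block.astype(np.float64), axis=1)).sum()
        gy = np.abs(np.diff(block.astype(np.float64), axis=0)).sum()
        level = gx + gy
        orientation = 'horizontal' if gy > gx else 'vertical'
        return level, orientation

    def select_cnt_configuration(block, level_threshold=500.0):
        """Map the activity information to a (hypothetical) CNT configuration label."""
        level, orientation = edge_activity(block)
        if level < level_threshold:
            return 'flat'
        return 'edge_' + orientation

    print(select_cnt_configuration(np.tile(np.arange(8), (8, 1)) * 16))  # strong vertical edges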
Abstract:
An intra prediction mode-based image processing method includes: obtaining, on the basis of the intra prediction mode of a current block, a first prediction sample value and a second prediction sample value by using reference samples neighboring the current block; and generating a prediction sample for the current block by linear interpolation of the first prediction sample value and the second prediction sample value.
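A minimal sketch of the interpolation step, assuming the two values are weighted by the distances from the current sample position to the two reference positions (the distance-based weighting and the function name are illustrative assumptions):

    def interpolate_prediction(first_value, second_value, dist_to_first, dist_to_second):
        """Linear interpolation of the two prediction sample values, weighted by the
        distances to the two reference positions: the closer reference gets the
        larger weight."""
        total = dist_to_first + dist_to_second
        w_first = dist_to_second / total
        w_second = dist_to_first / total
        return w_first * first_value + w_second * second_value

    # Example: a sample three times closer to the first reference than to the second.
    print(interpolate_prediction(100.0, 60.0, dist_to_first=1, dist_to_second=3))  # 90.0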
Abstract:
A picture filtering method performed by an encoding device, according to the present invention, comprises the steps of: deriving, from a current picture, regions for adaptive loop filtering (ALF); determining ALF coefficients on a per-picture or per-region basis; determining, on a per-region basis and on the basis of the ALF coefficients, whether ALF is applicable; re-determining ALF coefficients for a first region to which ALF is applicable; deriving a filter shape for the first region; determining, on a per-coding-unit (CU) basis within the first region and on the basis of the re-determined ALF coefficients, whether ALF is applicable; performing filtering on ALF-applicable CUs on the basis of the derived filter shape and the re-determined ALF coefficients; and transmitting at least one of information on the first region to which ALF is applicable and information on the ALF-applicable CUs. According to the present invention, efficient filtering suited to the image characteristics of each region may be applied.
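A simplified sketch of the region-level and CU-level on/off decisions: filtering is considered applicable when it reduces the error versus the original. A real encoder would use a rate-distortion cost and the actual ALF filter shapes; the 3-tap stand-in filter, the SSE criterion, and the names below are assumptions.

    import numpy as np

    def alf_filter(block, coeffs):
        """Toy 1-D symmetric 3-tap horizontal filter standing in for an ALF filter shape."""
        padded = np.pad(block.astype(np.float64), ((0, 0), (1, 1)), mode='edge')
        return (coeffs[0] * padded[:, :-2] + coeffs[1] * padded[:, 1:-1]
                + coeffs[2] * padded[:, 2:])

    def alf_usable(recon, original, coeffs):
        """Decide whether ALF helps: filtering must reduce the error versus the original."""
        sse_off = np.sum((recon - original) ** 2)
        sse_on = np.sum((alf_filter(recon, coeffs) - original) ** 2)
        return sse_on < sse_off

    # Region-level decision, then per-CU decisions inside an ALF-applicable region.
    orig = np.outer(np.linspace(0, 255, 8), np.ones(8))   # smooth content
    recon = orig + np.random.randn(8, 8) * 5               # noisy reconstruction
    coeffs = (0.25, 0.5, 0.25)                              # smoothing coefficients for the region
    if alf_usable(recon, orig, coeffs):
        cu_flags = [alf_usable(recon[r:r+4, c:c+4], orig[r:r+4, c:c+4], coeffs)
                    for r in (0, 4) for c in (0, 4)]        # 4x4 CUs within the region
        print("per-CU ALF flags:", cu_flags)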
Abstract:
The present invention relates to a device and a method for coding a 3D video. A decoding method according to the present invention comprises the steps of: receiving, through a first syntax, a camera parameter for converting a depth value into a disparity value; determining whether the camera parameter applied to a previous slice or picture also applies to a current slice or picture; and, if the camera parameter applies to the current slice or picture, deriving a disparity value of a current block on the basis of the camera parameter. According to the present invention, slices or pictures within a certain interval may share the same camera parameter, the transmission of redundant information may be avoided, and coding efficiency may be improved.
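In 3D video coding the depth-to-disparity camera parameters are commonly expressed as a scale and an offset applied to the depth value; the sketch below assumes that parameterisation and illustrates reusing the previous slice's parameters when a reuse flag is set. All names, the precision shift, and the reuse flag are illustrative assumptions.

    def depth_to_disparity(depth, scale, offset, precision_shift=8):
        """Convert a depth sample into a disparity value using camera parameters
        expressed as a scale/offset pair (a common parameterisation; exact form assumed)."""
        return (depth * scale + offset) >> precision_shift

    class CameraParams:
        def __init__(self, scale, offset):
            self.scale, self.offset = scale, offset

    def params_for_slice(reuse_previous_flag, previous_params, parsed_params=None):
        """Reuse the camera parameters of the previous slice/picture when signalled,
        so that the same information is not transmitted again."""
        return previous_params if reuse_previous_flag else parsed_params

    prev = CameraParams(scale=1024, offset=512)
    current = params_for_slice(reuse_previous_flag=True, previous_params=prev)
    print(depth_to_disparity(depth=100, scale=current.scale, offset=current.offset))  # 402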
Abstract:
The present invention provides a method for coding and decoding a multi-view video. The method for coding a video, according to one embodiment of the present invention, comprises the steps of: determining whether residual prediction is to be performed for a current block in a current view; deriving a first reference block and a second reference block used for the residual prediction of the current block when residual prediction is performed for the current block; generating a residual prediction sample value of the current block based on a difference value between a sample value of the first reference block and a sample value of the second reference block; and deriving a prediction sample value of the current block by using the residual prediction sample value of the current block.
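A minimal sketch of the residual-prediction step: the residual prediction samples are the per-sample difference of the two reference blocks, which is then added to the base prediction of the current block. The optional weighting factor and the function names are assumptions for illustration.

    import numpy as np

    def residual_prediction_samples(first_ref, second_ref):
        """Residual prediction samples: per-sample difference of the two reference blocks."""
        return first_ref.astype(np.int32) - second_ref.astype(np.int32)

    def predict_with_residual(base_pred, first_ref, second_ref, weight=1.0):
        """Add the (optionally weighted) residual prediction to the base prediction
        of the current block and clip to the sample range."""
        residual = residual_prediction_samples(first_ref, second_ref)
        return np.clip(base_pred + weight * residual, 0, 255)

    base = np.full((4, 4), 120)
    ref1 = np.full((4, 4), 130)
    ref2 = np.full((4, 4), 124)
    print(predict_with_residual(base, ref1, ref2, weight=0.5)[0, 0])  # 123.0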
Abstract:
A method for processing a video signal according to the present invention comprises the steps of: searching, in a predetermined sequence, for a reference view motion vector corresponding to a disparity vector of a current texture block, a motion vector of a spatial neighbor block of the current texture block, a disparity vector of the current texture block, a view synthesis prediction disparity vector of the current texture block, and a motion vector of a temporal neighbor block of the current texture block; storing the found motion vectors in the candidate list in the predetermined sequence; and performing inter prediction on the current texture block using one of the motion vectors stored in the candidate list, wherein the candidate list stores a predetermined number of motion vectors, and the predetermined sequence is set such that the view synthesis prediction disparity vector is always stored.
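One way to guarantee that the view synthesis prediction (VSP) candidate is always stored in a fixed-size list is to reserve a slot for it; the sketch below uses that scheme, which, along with the list size and names, is an assumption for illustration rather than the claimed construction.

    def build_candidate_list(candidates, vsp_candidate, list_size=4):
        """Build a fixed-size motion candidate list in the predetermined order while
        reserving one slot so that the VSP disparity vector is always stored."""
        result = []
        for cand in candidates:                  # predetermined search order
            if cand is not None and len(result) < list_size - 1:
                result.append(cand)
        result.append(vsp_candidate)             # VSP candidate is always inserted
        return result

    searched = ["ref_view_mv", "spatial_mv", "disparity_vector", "temporal_mv"]
    print(build_candidate_list(searched, vsp_candidate="vsp_disparity_vector"))
    # ['ref_view_mv', 'spatial_mv', 'disparity_vector', 'vsp_disparity_vector']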
Abstract:
An image decoding method includes: deriving an L0 motion vector and an L1 motion vector of a current block; deriving prediction samples of the current block based on the L0 motion vector and the L1 motion vector; and generating reconstructed samples of the current block based on the prediction samples. Deriving the prediction samples includes applying bi-directional optical flow (BDOF) to the current block based on whether an application condition of BDOF for the current block is satisfied, and the application condition of BDOF includes a condition whereby the values of L0 luma weighted prediction flag information and L1 luma weighted prediction flag information are both 0, where a value of 0 for each of the L0 luma weighted prediction flag information and the L1 luma weighted prediction flag information represents that no weighting factor exists for the L0 and L1 luma prediction components, respectively.
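A sketch of just this one part of the BDOF applicability check; the remaining BDOF conditions are summarised by a single placeholder argument, and the flag names are modelled on the L0/L1 luma weighted prediction flag information described above.

    def bdof_applicable(luma_weight_l0_flag, luma_weight_l1_flag, other_conditions=True):
        """BDOF may be applied only if no explicit luma weighting factor exists for
        either reference picture list, i.e. both weighted-prediction flags are 0
        (the remaining applicability conditions are not detailed here)."""
        return luma_weight_l0_flag == 0 and luma_weight_l1_flag == 0 and other_conditions

    print(bdof_applicable(0, 0))  # True  -> BDOF may be applied
    print(bdof_applicable(1, 0))  # False -> weighted prediction on L0, so BDOF is skipped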