Abstract:
Provided is a video decoding method including: obtaining a first motion vector indicating a first reference block of a current block in a first reference picture and a second motion vector indicating a second reference block of the current block in a second reference picture; obtaining a parameter related to pixel group unit motion compensation of the current block, based on at least one of information about the parameter related to the pixel group unit motion compensation and a parameter related to an image including the current block; generating a prediction block by performing, with respect to the current block, block unit motion compensation based on the first motion vector and the second motion vector and performing the pixel group unit motion compensation based on the parameter related to the pixel group unit motion compensation; and reconstructing the current block. Here, a pixel group may include at least one pixel.
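As an illustration of the two-stage prediction described above (block unit motion compensation followed by a pixel group unit refinement), the following Python sketch shows only the control flow; the helper names, the `pixel_group_size` parameter, the integer-pel motion, and the averaging-based refinement rule are assumptions made for illustration, not the claimed method.

```python
import numpy as np

def predict_block(ref0, ref1, mv0, mv1, pos, size, pixel_group_size=2):
    """Illustrative two-stage bi-directional prediction (hypothetical helpers).

    ref0/ref1 : reference pictures as 2-D arrays
    mv0/mv1   : (dy, dx) motion vectors, integer-pel only for brevity
    pos, size : top-left corner (y, x) and (h, w) of the current block
    """
    y, x = pos
    h, w = size
    # Block unit motion compensation: fetch and average the two reference blocks.
    blk0 = ref0[y + mv0[0]: y + mv0[0] + h, x + mv0[1]: x + mv0[1] + w]
    blk1 = ref1[y + mv1[0]: y + mv1[0] + h, x + mv1[1]: x + mv1[1] + w]
    pred = (blk0.astype(np.int32) + blk1.astype(np.int32) + 1) >> 1

    # Pixel group unit refinement: a placeholder per-group offset derived from
    # the local mismatch between the two references (not the claimed rule).
    g = pixel_group_size
    for gy in range(0, h, g):
        for gx in range(0, w, g):
            diff = blk0[gy:gy+g, gx:gx+g].astype(np.int32) - blk1[gy:gy+g, gx:gx+g]
            pred[gy:gy+g, gx:gx+g] += int(np.round(diff.mean() / 4))
    return np.clip(pred, 0, 255).astype(np.uint8)
```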
Abstract:
Provided are a video decoding method and a video encoding method of performing inter prediction in a bi-directional motion prediction mode, in which a prediction pixel value of a current block may be generated by using not only a pixel value of a first reference block of a first reference picture and a pixel value of a second reference block of a second reference picture, but also a first gradient value of the first reference block and a second gradient value of the second reference block. Accordingly, encoding and decoding efficiency may be increased since a prediction block similar to an original block may be generated.
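A minimal numerical sketch of the idea of refining a bi-prediction with gradient terms, in the spirit of optical-flow-based refinement; the per-block displacements `vx`, `vy` and the equal weighting are assumptions chosen for illustration, not coefficients of the claimed method.

```python
import numpy as np

def gradient_refined_biprediction(p0, p1, vx, vy):
    """Combine two reference blocks and their gradients (illustrative only).

    p0, p1 : co-located reference blocks (float arrays)
    vx, vy : assumed displacement between the two references
    """
    # Vertical (axis 0) and horizontal (axis 1) gradients of each reference block.
    gy0, gx0 = np.gradient(p0)
    gy1, gx1 = np.gradient(p1)
    # Average of the two references plus a gradient-based correction term.
    return 0.5 * (p0 + p1 + vx * (gx0 - gx1) + vy * (gy0 - gy1))
```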
Abstract:
Provided is a video decoding method including: determining an inter prediction mode of a current block when the current block is inter-predicted; determining at least one reference sample location to be referred to by the current block, based on the inter prediction mode of the current block; determining filter information to be applied to at least one reconstructed reference sample corresponding to the at least one reference sample location, based on the inter prediction mode of the current block; performing filtering on the at least one reconstructed reference sample, based on the filter information; and decoding the current block by using prediction samples generated via the filtering.
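The following sketch illustrates the shape of the step that selects filter information from the inter prediction mode and applies it to reconstructed reference samples; the mode names and tap values are placeholders, not the filters of the described method.

```python
import numpy as np

# Hypothetical mapping from inter prediction mode to a 1-D filter kernel.
FILTERS = {
    "uni_prediction": np.array([1, 2, 1]) / 4.0,
    "bi_prediction":  np.array([1, 4, 6, 4, 1]) / 16.0,
}

def filter_reference_samples(samples, inter_mode):
    """Apply a mode-dependent filter to reconstructed reference samples."""
    kernel = FILTERS[inter_mode]
    return np.convolve(samples, kernel, mode="same")
```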
Abstract:
Provided is a video decoding method including: obtaining motion prediction mode information regarding a current block in a current picture; when a bi-directional motion prediction mode is indicated, obtaining a first motion vector and a second motion vector indicating a first reference block and a second reference block of the current block in a first reference picture and a second reference picture, respectively; generating a pixel value of a first pixel of the first reference block indicated by the first motion vector and a pixel value of a second pixel of the second reference block indicated by the second motion vector by applying an interpolation filter to a first neighboring region of the first pixel and a second neighboring region of the second pixel; generating gradient values of the first pixel and the second pixel by applying a filter to the first neighboring region and the second neighboring region; and generating a prediction pixel value of the current block by using the pixel values and the gradient values of the first pixel and the second pixel.
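The sketch below shows, for a single reference pixel, how an interpolation filter and a gradient filter can both be applied to the pixel's neighboring region; the 4-tap and 3-tap kernels are placeholders, not the standardized or claimed coefficients. The resulting values and gradients from the two reference blocks would then be combined into the prediction pixel value as outlined in the abstract.

```python
import numpy as np

def sample_value_and_gradient(row, center, interp_taps, grad_taps):
    """Apply an interpolation filter and a gradient filter around one pixel.

    row         : 1-D array of reference samples (the pixel's neighborhood)
    center      : index of the pixel of interest within `row`
    interp_taps : interpolation filter coefficients (assumed normalized)
    grad_taps   : gradient filter coefficients (assumed zero-sum)
    """
    half = len(interp_taps) // 2
    value = float(np.dot(row[center - half: center - half + len(interp_taps)], interp_taps))
    half_g = len(grad_taps) // 2
    gradient = float(np.dot(row[center - half_g: center - half_g + len(grad_taps)], grad_taps))
    return value, gradient

# Example with placeholder kernels.
row = np.array([10, 12, 15, 20, 26, 30, 31, 31], dtype=float)
v, g = sample_value_and_gradient(row, center=4,
                                 interp_taps=np.array([-1, 5, 5, -1]) / 8.0,
                                 grad_taps=np.array([-1, 0, 1]) / 2.0)
```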
Abstract:
A video encoding method and apparatus and video decoding method and apparatus generate a restored image having a minimum error with respect to an original image based on offset merge information indicating whether offset parameters of a current block and at least one neighboring block from among blocks of video are identical.
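A small sketch of the reuse decision implied by the offset merge information: when the offset parameters of the current block equal those of a neighboring block, only merge information needs to be signalled. The two-flag left/above scheme here is an assumption for illustration; only the identity test reflects the abstract.

```python
def decide_offset_merge(current_offsets, left_offsets, above_offsets):
    """Return merge information, reusing a neighbor's offset parameters when identical."""
    if left_offsets is not None and current_offsets == left_offsets:
        return {"merge_left": True}
    if above_offsets is not None and current_offsets == above_offsets:
        return {"merge_left": False, "merge_above": True}
    return {"merge_left": False, "merge_above": False, "offsets": current_offsets}
```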
Abstract:
Provided is a video decoding method including obtaining, from a bitstream, upsampling phase set information indicating whether a phase of samples comprised in a current layer is adjusted; when the phase is adjusted according to the upsampling phase set information, obtaining a luma vertical phase difference, a luma horizontal phase difference, a chroma vertical phase difference, and a chroma horizontal phase difference from the bitstream; and determining a prediction picture of the current layer by upsampling a reference layer based on the luma vertical phase difference, the luma horizontal phase difference, the chroma vertical phase difference, and the chroma horizontal phase difference, wherein a phase of luma samples comprised in the prediction picture is adjusted according to the luma vertical phase difference and the luma horizontal phase difference, a phase of chroma samples comprised in the prediction picture is adjusted according to the chroma vertical phase difference and the chroma horizontal phase difference, and the luma vertical phase difference and the chroma vertical phase difference are determined according to a scanning scheme with respect to the reference layer.
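The following sketch shows how a signalled phase difference can shift the reference-layer positions used during upsampling of luma samples; the 1/16-sample phase unit and the floating-point mapping are assumptions made for illustration, not the exact derivation in the abstract.

```python
def luma_reference_position(x, y, scale_x, scale_y, phase_h, phase_v):
    """Map a current-layer luma sample (x, y) to a reference-layer position,
    shifted by the signalled horizontal/vertical luma phase differences.

    scale_x/scale_y : assumed up-sampling ratios (current size / reference size)
    phase_h/phase_v : signalled phase differences in assumed 1/16-sample units
    """
    ref_x = x / scale_x - phase_h / 16.0
    ref_y = y / scale_y - phase_v / 16.0
    return ref_x, ref_y
```

The chroma samples would be mapped analogously with the chroma horizontal and vertical phase differences.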
Abstract:
A video decoding method includes receiving information about whether to correct a chroma sample, obtaining a correction value determined using a luma value in a range corresponding to a position of a determined chroma pixel, based on the received information, and correcting a chroma value using the obtained correction value.
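A minimal sketch of a luma-based chroma correction, assuming the luma range is the co-located luma block and that the correction is a weighted deviation of local luma; the weight and the mid-gray anchor are illustrative assumptions, not the derivation used by the method.

```python
import numpy as np

def correct_chroma(chroma, luma, flag, weight=0.1):
    """Correct a chroma value using luma samples co-located with the chroma position.

    chroma : reconstructed chroma value at the current position
    luma   : array of luma samples in the range corresponding to that position
             (e.g. the co-located 2x2 luma block for 4:2:0 content)
    flag   : the received on/off information
    weight : assumed scaling of the luma-derived correction term
    """
    if not flag:
        return chroma
    correction = weight * (luma.mean() - 128)  # deviation of local luma from mid-gray
    return float(np.clip(chroma + correction, 0, 255))
```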
Abstract:
Provided are scalable video encoding and decoding methods using de-noise filtering. The scalable video decoding method includes: generating reconstructed base layer images from a base layer image stream; determining, for a current enhancement layer image of an enhancement layer image stream, a reference picture list including at least one of a reconstructed base layer image corresponding to the current enhancement layer image, from among the reconstructed base layer images, and a de-noise filtered reconstructed base layer image; and reconstructing the current enhancement layer image by using a reference image that is determined from the reference picture list.
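A sketch of building such a reference picture list, with a simple 3x3 mean filter standing in for the de-noise filter, which the abstract does not specify.

```python
import numpy as np

def build_enhancement_reference_list(recon_base, use_denoised=True):
    """Return a reference list containing the reconstructed base layer image
    and, optionally, a de-noise filtered version of it (illustrative only)."""
    refs = [recon_base]
    if use_denoised:
        h, w = recon_base.shape
        padded = np.pad(recon_base.astype(np.float32), 1, mode="edge")
        # 3x3 mean filter as a placeholder de-noise filter.
        denoised = sum(padded[dy:dy + h, dx:dx + w]
                       for dy in range(3) for dx in range(3)) / 9.0
        refs.append(denoised.astype(recon_base.dtype))
    return refs
```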
Abstract:
Provided are a method of interpolating an image by determining interpolation filter coefficients, and an apparatus for performing the same. The method includes: differently selecting an interpolation filter, from among interpolation filters for generating at least one sub-pel-unit pixel value located between integer-pel-unit pixels, based on a sub-pel-unit interpolation location and smoothness; and generating the at least one sub-pel-unit pixel value by interpolating, using the selected interpolation filter, pixel values of the integer-pel-unit pixels.
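A sketch of selecting a filter by interpolation location and smoothness and then interpolating; the filter bank keys and tap values are placeholders, not the derived coefficients of the method.

```python
import numpy as np

# Hypothetical filter bank indexed by (sub-pel position, smoothness level).
FILTER_BANK = {
    (0.5, "sharp"):  np.array([-1, 5, 5, -1]) / 8.0,
    (0.5, "smooth"): np.array([1, 3, 3, 1]) / 8.0,
}

def interpolate_half_pel(pixels, index, smoothness="sharp"):
    """Generate the half-pel sample between pixels[index] and pixels[index + 1]."""
    taps = FILTER_BANK[(0.5, smoothness)]
    window = pixels[index - 1: index + 3]  # four integer-pel neighbors
    return float(np.dot(window, taps))
```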