Abstract:
Provided are a video encoding method of adjusting a range of encoded output data to adjust a bit depth during restoring of encoded samples, and a video decoding method of substantially preventing overflow from occurring in output data in operations of a decoding process. The video decoding method includes parsing and restoring quantized transformation coefficients in units of blocks of an image from a received bitstream, restoring transformation coefficients by performing inverse quantization on the quantized transformation coefficients, and restoring samples by performing one-dimensional (1D) inverse transformation and inverse scaling on the transformation coefficients. At least one from among the transformation coefficients and the samples has a predetermined bit depth or less.
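The overflow-prevention idea above can be sketched as clamping each intermediate result to a fixed signed bit depth between decoding stages. The function names, scale/shift parameters, and the 16-bit working depth below are illustrative assumptions, not values taken from the abstract.

```python
def clip_to_bit_depth(value, bit_depth=16):
    """Clamp an intermediate value to the signed range of `bit_depth` bits."""
    lo = -(1 << (bit_depth - 1))
    hi = (1 << (bit_depth - 1)) - 1
    return max(lo, min(hi, value))

def inverse_quantize(quantized, scale, shift, bit_depth=16):
    # Scale each quantized coefficient back, then clamp so that the
    # following 1D inverse-transform stages cannot overflow.
    return [clip_to_bit_depth((c * scale) >> shift, bit_depth)
            for c in quantized]
```

Because every stage's output is guaranteed to fit the predetermined bit depth, the next stage can use fixed-width arithmetic without overflow checks.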
Abstract:
Provided are a video encoding method and apparatus and a video decoding method and apparatus. In the video encoding method, a first predicted coding unit of a current coding unit that is to be encoded is produced, a second predicted coding unit is produced by changing a value of each pixel of the first predicted coding unit by using each pixel of the first predicted coding unit and at least one neighboring pixel of each pixel, and the difference between the current coding unit and the second predicted coding unit is encoded, thereby improving video prediction efficiency.
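The two-stage prediction above can be illustrated as a post-filter that mixes each pixel of the first prediction with its upper and left neighbors. The 3-tap weights and the border handling below are hypothetical choices for the sketch; the patent does not specify them.

```python
def refine_prediction(pred, weights=(2, 1, 1)):
    """Produce a second predicted block by mixing each pixel of the first
    predicted block with its upper and left neighbors (illustrative rule)."""
    h, w = len(pred), len(pred[0])
    wc, wu, wl = weights          # center, upper, left weights
    total = wc + wu + wl
    out = [row[:] for row in pred]
    for y in range(h):
        for x in range(w):
            up = pred[y - 1][x] if y > 0 else pred[y][x]
            left = pred[y][x - 1] if x > 0 else pred[y][x]
            # Weighted average with rounding; borders reuse the pixel itself.
            out[y][x] = (wc * pred[y][x] + wu * up + wl * left
                         + total // 2) // total
    return out
```

The encoder would then code the residual between the current coding unit and this refined (second) prediction rather than the first one.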
Abstract:
A sub-pel-unit image interpolation method using a transformation-based interpolation filter includes: selecting, based on a sub-pel-unit interpolation location in a region supported by a plurality of interpolation filters for generating at least one sub-pel-unit pixel value located between integer-pel-unit pixels, one of a symmetric interpolation filter and an asymmetric interpolation filter from among the plurality of interpolation filters; and using the selected interpolation filter to generate the at least one sub-pel-unit pixel value by interpolating the integer-pel-unit pixels.
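The selection rule can be sketched as follows: a half-pel location lies midway between integer pixels, so a symmetric filter fits it, while a quarter-pel location is closer to one side, so an asymmetric filter fits better. The tap values below are illustrative (they merely sum to 64 for 6-bit normalization) and are not the filters from the patent.

```python
# Hypothetical filter banks (taps are illustrative, each summing to 64).
SYMMETRIC_8TAP = [-1, 4, -11, 40, 40, -11, 4, -1]
ASYMMETRIC_7TAP = [-1, 4, -10, 58, 17, -5, 1]

def select_filter(subpel_pos):
    # Position 2 of 4 is the half-pel location, equidistant from both
    # integer pixels -> symmetric taps; otherwise use asymmetric taps.
    return SYMMETRIC_8TAP if subpel_pos == 2 else ASYMMETRIC_7TAP

def interpolate(pixels, subpel_pos):
    taps = select_filter(subpel_pos)
    acc = sum(t * p for t, p in zip(taps, pixels))
    return (acc + 32) >> 6    # round and drop the 6-bit filter gain
```

On a flat region both filters reproduce the input value, since the taps sum to the normalization factor.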
Abstract:
Provided are scalable video encoding and decoding methods and apparatuses for compensating for inter-layer prediction errors between different layer images by using sample adaptive offsets (SAOs). The scalable video decoding method includes: obtaining inter-layer SAO use information indicating whether to compensate for prediction errors according to inter-layer prediction between a base layer reconstructed image and an enhancement layer prediction image, and SAO parameters indicating a SAO type of the enhancement layer prediction image and an offset, from a received enhancement layer stream; determining the SAO type of the enhancement layer prediction image and offsets corresponding to the prediction errors classified according to categories, from the obtained SAO parameters; and generating an enhancement layer reconstructed image by using the enhancement layer prediction image compensated by the determined offsets by determining a category of a current sample for each pixel location of the enhancement layer prediction image.
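The offset-compensation step can be sketched for the band-offset case: samples of the prediction image are classified into bands by magnitude, and the bands signaled from `band_position` receive per-category offsets. The 32-band split and 4-offset convention below follow common SAO practice and are assumptions for this sketch, not details stated in the abstract.

```python
def apply_band_offsets(pred, band_position, offsets, bit_depth=8):
    """Add a per-band offset to each predicted sample (band-offset sketch).
    Samples split into 32 bands; the bands starting at `band_position`
    receive offsets[0], offsets[1], ... in order."""
    shift = bit_depth - 5            # 32 bands over the sample range
    max_val = (1 << bit_depth) - 1
    out = []
    for s in pred:
        band = s >> shift            # category of this sample
        idx = band - band_position
        o = offsets[idx] if 0 <= idx < len(offsets) else 0
        out.append(max(0, min(max_val, s + o)))  # clip to valid range
    return out
```

The decoder would run this over the enhancement layer prediction image, producing the compensated image used to build the enhancement layer reconstruction.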
Abstract:
The present disclosure relates to signaling of sample adaptive offset (SAO) parameters determined to minimize an error between an original image and a reconstructed image in video encoding and decoding operations. An SAO decoding method includes obtaining context-encoded leftward SAO merge information and context-encoded upward SAO merge information from a bitstream of a largest coding unit (LCU); obtaining SAO on/off information context-encoded with respect to each color component, from the bitstream; if the SAO on/off information indicates that the SAO operation is to be performed, obtaining absolute offset value information for each SAO category bypass-encoded with respect to each color component, from the bitstream; and obtaining one of band position information and edge class information bypass-encoded with respect to each color component, from the bitstream.
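The parsing order described above can be sketched with two stand-in readers, one for context-coded bins and one for bypass-coded bins. The early-return on merge, the component count, and the category count are assumptions for illustration; real entropy decoding is far more involved.

```python
def parse_sao_params(read_ctx, read_bypass,
                     num_components=3, num_categories=4):
    """Sketch of the SAO parsing order: merge flags (context-coded),
    then per-component on/off (context-coded), then absolute offsets
    and band-position/edge-class info (bypass-coded)."""
    params = {"merge_left": read_ctx(), "merge_up": read_ctx()}
    if params["merge_left"] or params["merge_up"]:
        return params                 # parameters inherited from a neighbor
    params["components"] = []
    for _ in range(num_components):
        comp = {"sao_on": read_ctx()}
        if comp["sao_on"]:
            comp["abs_offsets"] = [read_bypass()
                                   for _ in range(num_categories)]
            comp["band_or_edge_info"] = read_bypass()
        params["components"].append(comp)
    return params
```

Grouping the bypass-coded fields together, as this order does, lets a hardware decoder switch the arithmetic engine between context and bypass modes fewer times.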
Abstract:
Provided is a video decoding method including determining a displacement vector per unit time of pixels of a current block in a horizontal direction or a vertical direction, the pixels including a pixel adjacent to an inside of a boundary of the current block, by using values of reference pixels included in a first reference block and a second reference block, without using a stored value of a pixel located outside boundaries of the first reference block and the second reference block; and obtaining a prediction block of the current block by performing block-unit motion compensation and pixel group unit motion compensation on the current block by using a gradient value in the horizontal direction or the vertical direction of a first corresponding reference pixel in the first reference block which corresponds to a current pixel included in a current pixel group in the current block, a gradient value in the horizontal direction or the vertical direction of a second corresponding reference pixel in the second reference block which corresponds to the current pixel, a pixel value of the first corresponding reference pixel, a pixel value of the second corresponding reference pixel, and a displacement vector per unit time of the current pixel in the horizontal direction or the vertical direction. In this regard, the current pixel group may include at least one pixel.
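The per-pixel refinement above combines exactly the listed inputs: the two reference pixel values, the two gradients, and the per-unit-time displacement. A minimal sketch of that combination, with the rounding convention chosen here for illustration:

```python
def refine_pixel(p0, p1, gx0, gx1, gy0, gy1, vx, vy):
    """Bi-directional refinement of one pixel: average the two reference
    samples (p0, p1) and correct with the gradient difference weighted by
    the per-unit-time displacement (vx, vy). Rounding is illustrative."""
    correction = vx * (gx0 - gx1) + vy * (gy0 - gy1)
    return (p0 + p1 + correction + 1) >> 1
```

With `vx = vy = 0` this reduces to the plain bi-directional average, i.e. block-unit motion compensation alone; nonzero displacement adds the pixel-group-level correction.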
Abstract:
A video decoding and video encoding method of performing inter prediction in a bi-directional motion prediction mode, in which a prediction pixel value of a current block may be generated by not only using a pixel value of a first reference block of a first reference picture and a pixel value of a second reference block of a second reference picture, but also using a first gradient value of the first reference block and a second gradient value of the second reference block. Accordingly, encoding and decoding efficiency may be increased since a prediction block similar to an original block may be generated.
Abstract:
Provided is a video decoding method including: obtaining a first motion vector indicating a first reference block of a current block in a first reference picture and a second motion vector indicating a second reference block of the current block in a second reference picture; obtaining a parameter related to pixel group unit motion compensation of the current block, based on at least one of information of the parameter related to the pixel group unit motion compensation and a parameter related to an image including the current picture; generating a prediction block by performing, with respect to the current block, block unit motion compensation based on the first motion vector and the second motion vector and performing the pixel group unit motion compensation based on the parameter related to the pixel group unit motion compensation; and reconstructing the current block. Here, a pixel group may include at least one pixel.
Abstract:
An image decoding method and apparatus according to an embodiment may extract, from a bitstream, a quantization coefficient generated through core transformation, secondary transformation, and quantization; generate an inverse-quantization coefficient by performing inverse quantization on the quantization coefficient; generate a secondary inverse-transformation coefficient by performing secondary inverse-transformation on a low frequency component of the inverse-quantization coefficient, the secondary inverse-transformation corresponding to the secondary transformation; and perform core inverse-transformation on the secondary inverse-transformation coefficient, the core inverse-transformation corresponding to the core transformation.
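The decoding pipeline above can be sketched in one dimension: the secondary inverse transform is applied only to the low-frequency portion of the dequantized coefficients, and the core inverse transform is then applied to the full vector. The matrix-vector form and the `low_freq_size` parameter are illustrative assumptions; the actual transforms are 2D and standardized separately.

```python
def apply_matrix(mat, vec):
    """Multiply a square matrix by a vector (plain lists, no deps)."""
    return [sum(m * v for m, v in zip(row, vec)) for row in mat]

def inverse_transform_pipeline(dequant, sec_inv, core_inv, low_freq_size):
    """Secondary inverse transform on the first `low_freq_size`
    coefficients only, then core inverse transform on the whole vector
    (1D sketch of the quantization -> secondary -> core chain)."""
    low = apply_matrix(sec_inv, dequant[:low_freq_size])
    return apply_matrix(core_inv, low + dequant[low_freq_size:])
```

Restricting the secondary stage to low-frequency coefficients keeps its cost small, since most energy after the core transform concentrates there.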