Abstract:
Provided is a video decoding method including determining a displacement vector per unit time of pixels of a current block in a horizontal direction or a vertical direction, the pixels including a pixel adjacent to an inside of a boundary of the current block, by using values of reference pixels included in a first reference block and a second reference block, without using a stored value of a pixel located outside the boundaries of the first reference block and the second reference block; and obtaining a prediction block of the current block by performing block-unit motion compensation and pixel-group-unit motion compensation on the current block by using a gradient value in the horizontal direction or the vertical direction of a first corresponding reference pixel in the first reference block which corresponds to a current pixel included in a current pixel group in the current block, a gradient value in the horizontal direction or the vertical direction of a second corresponding reference pixel in the second reference block which corresponds to the current pixel, a pixel value of the first corresponding reference pixel, a pixel value of the second corresponding reference pixel, and a displacement vector per unit time of the current pixel in the horizontal direction or the vertical direction. In this regard, the current pixel group may include at least one pixel.
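For illustration only, the following Python sketch combines the quantities named above in the usual bi-directional optical-flow manner: the block-unit part is the plain bi-prediction average, and the pixel-group-unit part is a gradient-based correction scaled by the per-unit-time displacement. The function name, the array layout, and the assumption that the gradient arrays were precomputed using only pixels inside the reference-block boundaries are illustrative rather than taken from the method itself.

    import numpy as np

    def bio_style_prediction(ref0, ref1, gx0, gy0, gx1, gy1, vx, vy):
        """Illustrative bi-directional optical-flow style combination.

        ref0, ref1 : pixel values of the corresponding reference pixels in the
                     first and second reference blocks (same-shaped arrays).
        gx*, gy*   : horizontal / vertical gradient values of those reference
                     pixels, assumed precomputed from in-block pixels only.
        vx, vy     : displacement vectors per unit time of each pixel (or
                     pixel group) in the horizontal / vertical direction.
        """
        block_pred = (ref0 + ref1) / 2.0                            # block-unit motion compensation
        correction = (vx * (gx0 - gx1) + vy * (gy0 - gy1)) / 2.0    # pixel-group-unit correction
        return block_pred + correction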
Abstract:
Provided are a video encoding method and apparatus that apply an interpolation filter according to characteristics of an image for motion compensation, and a video decoding method and apparatus corresponding thereto. The video encoding method according to various embodiments includes determining a degree of change between a reference sample of an integer pixel unit of a current sample and neighboring samples of at least one integer pixel unit adjacent to the reference sample; determining, based on the degree of change, an interpolation filter from among interpolation filters that have different frequency passbands and produce reference samples of a sub-pixel unit used to predict the current sample; determining a predicted sample value of the current sample by using a reference sample of a sub-pixel unit produced by applying the determined interpolation filter; and encoding a residual value between the predicted sample value and a sample value of the current sample.
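A minimal sketch of the selection step follows, assuming a simple absolute-difference measure of the degree of change, a fixed threshold, and two illustrative half-pel coefficient sets (the sharper one is the familiar HEVC-style 8-tap filter, the smoother one is made up for contrast); it also assumes `pos` is far enough from the array ends for the 8-sample window. None of these constants come from the method itself.

    import numpy as np

    SHARP_TAPS  = np.array([-1, 4, -11, 40, 40, -11, 4, -1]) / 64.0  # wider passband
    SMOOTH_TAPS = np.array([ 0, 0,   8, 24, 24,   8, 0,  0]) / 64.0  # narrower passband

    def predict_half_pel(samples, pos, threshold=10):
        """Choose a filter from the local degree of change and produce one
        half-pel reference sample to the right of integer position `pos`."""
        # Degree of change: differences between the reference sample and the
        # neighbouring integer-pel samples on either side of it.
        change = abs(int(samples[pos]) - int(samples[pos - 1])) + \
                 abs(int(samples[pos]) - int(samples[pos + 1]))
        taps = SHARP_TAPS if change >= threshold else SMOOTH_TAPS
        window = samples[pos - 3 : pos + 5].astype(float)   # 8 integer samples
        return float(np.dot(taps, window))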
Abstract:
The present disclosure relates to signaling of sample adaptive offset (SAO) parameters determined to minimize the error between an original image and a reconstructed image in video encoding and decoding operations. An SAO decoding method includes obtaining context-encoded leftward SAO merge information and context-encoded upward SAO merge information from a bitstream of a largest coding unit (LCU); obtaining SAO on/off information context-encoded with respect to each color component from the bitstream; when the SAO on/off information indicates that the SAO operation is to be performed, obtaining absolute offset value information for each SAO category, bypass-encoded with respect to each color component, from the bitstream; and obtaining one of band position information and edge class information, bypass-encoded with respect to each color component, from the bitstream.
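The parsing order described above can be sketched as follows; `reader` and its two methods (`read_context_bin`, `read_bypass_bits`) stand in for a real CABAC engine and are purely hypothetical, as are the fixed bit widths and the number of categories.

    def parse_sao_parameters(reader, num_categories=4):
        """Parse the SAO parameters of one largest coding unit (LCU)."""
        params = {}
        # Context-coded leftward / upward SAO merge information comes first.
        params['merge_left'] = reader.read_context_bin()
        params['merge_up'] = reader.read_context_bin()
        if params['merge_left'] or params['merge_up']:
            return params                    # reuse a neighbouring LCU's parameters
        for color in ('luma', 'cb', 'cr'):
            comp = {'on': reader.read_context_bin()}   # context-coded SAO on/off
            if comp['on']:
                # Bypass-coded absolute offset value for each SAO category.
                comp['abs_offsets'] = [reader.read_bypass_bits(5)
                                       for _ in range(num_categories)]
                # Bypass-coded band position (band type) or edge class (edge type).
                comp['position_or_class'] = reader.read_bypass_bits(5)
            params[color] = comp
        return params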
Abstract:
Disclosed are a video encoding method and apparatus and a video decoding method and apparatus. The method of encoding video includes: producing a first predicted coding unit of a current coding unit, which is to be encoded; determining whether the current coding unit includes a portion located outside a boundary of a current picture; and producing a second predicted coding unit by changing values of pixels of the first predicted coding unit by using those pixels and their neighboring pixels, when the current coding unit does not include a portion located outside the boundary of the current picture. Accordingly, a residual block, which is the difference between the current coding unit and the second predicted coding unit, can be encoded, thereby improving video prediction efficiency.
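As a rough sketch of the boundary check and the post-filtering idea, the code below skips the second prediction when the block crosses the picture boundary and otherwise smooths each pixel with its left and top neighbours; the 3-tap average is only a stand-in for whatever filter the method actually applies, and the function name and arguments are assumptions.

    import numpy as np

    def second_prediction(first_pred, x0, y0, pic_width, pic_height):
        """Produce the second predicted block only when the current block lies
        fully inside the picture; otherwise keep the first prediction."""
        h, w = first_pred.shape
        if x0 < 0 or y0 < 0 or x0 + w > pic_width or y0 + h > pic_height:
            return first_pred            # part of the block is outside the picture
        padded = np.pad(first_pred.astype(float), 1, mode='edge')
        # Each pixel becomes the average of itself and its left and top neighbours.
        return (padded[1:-1, 1:-1] + padded[1:-1, :-2] + padded[:-2, 1:-1]) / 3.0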
Abstract:
A video encoding method and apparatus, and a video decoding method and apparatus, for generating a reconstructed image having a minimized error between an original image and the reconstructed image. The video decoding method, which involves a sample adaptive offset (SAO) adjustment, includes: obtaining slice SAO parameters with respect to a current slice from a slice header of a received bitstream; obtaining luma SAO use information for a luma component of the current slice and chroma SAO use information for chroma components thereof from among the slice SAO parameters; determining whether to perform the SAO adjustment on the luma component of the current slice based on the obtained luma SAO use information; and equally determining whether to perform the SAO adjustment on a first chroma component and a second chroma component of the current slice based on the obtained chroma SAO use information.
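A minimal sketch of how the two slice-level flags drive the per-component decision; the syntax-element names used as dictionary keys are assumptions, and the point is simply that one chroma flag governs both chroma components identically.

    def decode_slice_sao_flags(slice_header):
        """Derive per-component SAO on/off decisions from slice-header flags."""
        luma_on = bool(slice_header['slice_sao_luma_flag'])
        # A single chroma flag controls the first and second chroma
        # components together, so Cb and Cr always agree.
        chroma_on = bool(slice_header['slice_sao_chroma_flag'])
        return {'Y': luma_on, 'Cb': chroma_on, 'Cr': chroma_on}

    # Example: SAO enabled for luma only in this slice.
    # decode_slice_sao_flags({'slice_sao_luma_flag': 1, 'slice_sao_chroma_flag': 0})
    # -> {'Y': True, 'Cb': False, 'Cr': False}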
Abstract:
Provided is a method of determining an up-sampling filter to accurately interpolate a sample value for each sampling position according to an up-sampling ratio for scalable video encoding and decoding. An up-sampling method for scalable video encoding includes determining a phase shift between a pixel of a low resolution image and a pixel of a high resolution image based on a scaling factor between the high resolution image and the low resolution image; selecting at least one filter coefficient set corresponding to the determined phase shift from filter coefficient data comprising filter coefficient sets corresponding to phase shifts; generating the high resolution image by performing filtering on the low resolution image by using the selected at least one filter coefficient set; and generating an improvement layer bitstream comprising high resolution encoding information generated by performing encoding on the high resolution image and up-sampling filter information indicating the determined phase shift.
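The phase-shift computation can be sketched as below; the 1/16-sample phase resolution, the two example coefficient sets, and the function name are assumptions chosen to show the idea for 2x up-sampling, not the actual filter coefficient data of the method.

    from fractions import Fraction

    # Illustrative filter-coefficient data: one 8-tap set per phase shift,
    # shown here only for the two phases needed by 2x up-sampling.
    FILTER_COEFF_DATA = {
        0: [0, 0, 0, 64, 0, 0, 0, 0],          # zero phase shift: copy the sample
        8: [-1, 4, -11, 40, 40, -11, 4, -1],   # half-sample phase shift
    }

    def phase_shift_for(x_hi, low_w, high_w, num_phases=16):
        """Phase shift (in 1/num_phases units) at high-resolution column x_hi,
        derived from the scaling factor between the two layers."""
        scale = Fraction(low_w, high_w)        # scaling factor between the layers
        ref = scale * x_hi                     # corresponding low-resolution position
        return int((ref - int(ref)) * num_phases) % num_phases

    # Example: 2x up-sampling (8 -> 16 columns) alternates between the two sets.
    sets = [FILTER_COEFF_DATA[phase_shift_for(x, 8, 16)] for x in range(4)]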
Abstract:
Provided are a video encoding method of adjusting a range of encoded output data to adjust a bit depth during restoring of encoded samples, and a video decoding method of substantially preventing overflow from occurring in output data during operations of a decoding process. The video decoding method includes parsing and restoring quantized transformation coefficients in units of blocks of an image from a received bitstream, restoring transformation coefficients by performing inverse quantization on the quantized transformation coefficients, and restoring samples by performing one-dimensional (1D) inverse transformation and inverse scaling on the transformation coefficients. At least one from among the transformation coefficients and the samples has a predetermined bit depth or less.
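A minimal sketch of the range restriction, assuming a signed 16-bit bound on the intermediate data between the two one-dimensional inverse-transform passes; the helper names in the usage comment are hypothetical.

    def clip_to_bit_depth(values, bit_depth=16):
        """Clip intermediate values into a signed `bit_depth`-bit range so that
        overflow cannot occur between the two 1D inverse-transform passes."""
        lo, hi = -(1 << (bit_depth - 1)), (1 << (bit_depth - 1)) - 1
        return [max(lo, min(hi, v)) for v in values]

    # Hypothetical decoding flow (helper names are illustrative only):
    # coeffs  = inverse_quantize(quantized_coeffs)
    # tmp     = clip_to_bit_depth(inverse_transform_1d(coeffs))   # vertical pass
    # samples = clip_to_bit_depth(inverse_transform_1d(tmp))      # horizontal pass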
Abstract:
A video decoding method includes extracting offset mergence information of a current largest coding unit (LCU), the offset mergence information indicating whether to adopt a second offset parameter as a first offset parameter of the current LCU; reconstructing the first offset parameter of the current LCU based on the offset mergence information, the first offset parameter including an offset type, an offset value, and an offset class of the current LCU; determining whether the current LCU is an edge type or a band type, based on the offset type; determining an edge direction according to the edge type or a band range according to the band type, based on the offset class; determining a difference value between reconstructed pixels and original pixels included in the offset class, based on the offset value; and adjusting pixel values of reconstructed pixels based on the difference value.
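For the band-type case, applying the parsed offsets can be sketched as follows; the 32-band split and the four signaled offsets follow common SAO practice rather than the text above, and the clipping bound is assumed from the sample bit depth.

    import numpy as np

    def apply_sao_band_offset(recon, band_position, offsets, bit_depth=8):
        """Adjust reconstructed pixel values with band-type SAO offsets.

        Pixels are grouped into 32 equal bands by value; the consecutive bands
        starting at `band_position` receive the signaled offsets."""
        shift = bit_depth - 5                  # 32 bands across the sample range
        out = recon.astype(int)
        band = out >> shift
        for i, off in enumerate(offsets):      # typically four signaled offsets
            out[band == band_position + i] += off
        return np.clip(out, 0, (1 << bit_depth) - 1)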