Abstract:
A video encoding method and apparatus and a video decoding method and apparatus generate a restored image having a minimum error with respect to an original image, based on offset merge information indicating whether the offset parameters of a current block and of at least one neighboring block among the blocks of a video are identical.
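As a rough illustration of the merge behavior described above (not the claimed implementation), the sketch below shows how a decoder might reuse a neighboring block's offset parameters when the merge information says they are identical; the names OffsetParams, resolve_offset_params, and parse_explicit_params are hypothetical.

```python
# Hypothetical sketch of reusing SAO-style offset parameters when merge
# information indicates the current block shares them with a neighbor.
from dataclasses import dataclass
from typing import List, Optional, Callable

@dataclass
class OffsetParams:
    offset_type: str      # e.g. "edge" or "band" (illustrative)
    offsets: List[int]    # per-category offset values

def resolve_offset_params(merge_left: bool, merge_up: bool,
                          left: Optional[OffsetParams],
                          up: Optional[OffsetParams],
                          parse_explicit_params: Callable[[], OffsetParams]) -> OffsetParams:
    # If merge information signals that the current block's offset parameters
    # are identical to a neighbor's, copy them instead of parsing new ones.
    if merge_left and left is not None:
        return left
    if merge_up and up is not None:
        return up
    return parse_explicit_params()
```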
Abstract:
Provided are a method and apparatus for interpolating an image. The method includes: selecting a first filter, from among a plurality of different filters, for interpolating between pixel values of integer pixel units, according to an interpolation location; and generating at least one pixel value of at least one fractional pixel unit by interpolating between the pixel values of the integer pixel units by using the selected first filter.
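A minimal sketch of the idea, assuming a table that maps each fractional interpolation location to a set of filter taps; the tap values and the gain of 64 are placeholders for demonstration, not the filters claimed in the abstract.

```python
# Location-dependent filter selection followed by fractional-pel interpolation.
FILTERS_BY_LOCATION = {
    0.25: [-1, 4, -10, 58, 17, -5, 1, 0],   # placeholder 8-tap filter
    0.5:  [-1, 4, -11, 40, 40, -11, 4, -1],
    0.75: [0, 1, -5, 17, 58, -10, 4, -1],
}

def interpolate(pixels, center, location):
    """Generate one fractional-pel value around integer index `center`."""
    taps = FILTERS_BY_LOCATION[location]      # select the first filter by location
    half = len(taps) // 2
    window = pixels[center - half + 1: center + half + 1]
    acc = sum(t * p for t, p in zip(taps, window))
    return acc // 64                          # normalize by the filter gain
```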
Abstract:
Provided are a video encoding method of adjusting a range of encoded output data to adjust a bit depth while restoring encoded samples, and a video decoding method of substantially preventing overflow from occurring in output data during operations of a decoding process. The video decoding method includes parsing and restoring quantized transformation coefficients in units of blocks of an image from a received bitstream, restoring transformation coefficients by performing inverse quantization on the quantized transformation coefficients, and restoring samples by performing one-dimensional (1D) inverse transformation and inverse scaling on the transformation coefficients. At least one from among the transformation coefficients and the samples has a predetermined bit depth or less.
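The sketch below illustrates, under assumed parameters, how intermediate results of a row/column inverse transform might be clipped to a predetermined bit depth so that later stages cannot overflow; the 16-bit bound, the shift values, and the helper names are illustrative assumptions, not values from the abstract.

```python
# Clip intermediate inverse-transform results to a predetermined bit depth.
def clip_to_bit_depth(value: int, bit_depth: int = 16) -> int:
    lo, hi = -(1 << (bit_depth - 1)), (1 << (bit_depth - 1)) - 1
    return max(lo, min(hi, value))

def inverse_transform_2d(coeffs, inv_1d, shift1=7, shift2=12):
    # First 1D stage (rows), with rounding shift and clipping so the inputs
    # to the column stage stay within the predetermined bit depth.
    rows = [[clip_to_bit_depth((v + (1 << (shift1 - 1))) >> shift1)
             for v in inv_1d(row)] for row in coeffs]
    cols = list(zip(*rows))
    # Second 1D stage (columns), again scaled and clipped before output.
    out = [[clip_to_bit_depth((v + (1 << (shift2 - 1))) >> shift2)
            for v in inv_1d(list(col))] for col in cols]
    return [list(r) for r in zip(*out)]
```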
Abstract:
Provided are a video encoding method and apparatus and a video decoding method and apparatus. In the video encoding method, a first predicted coding unit of a current coding unit to be encoded is produced; a second predicted coding unit is produced by changing the value of each pixel of the first predicted coding unit by using that pixel and at least one of its neighboring pixels; and the difference between the current coding unit and the second predicted coding unit is encoded, thereby improving video prediction efficiency.
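A minimal sketch of producing the second predicted coding unit, assuming a simple averaging rule over the current pixel and its upper and left neighbors; the exact weighting and neighbor choice in the abstract may differ.

```python
# Refine a prediction block by blending each pixel with neighboring pixels.
def refine_prediction(pred):
    h, w = len(pred), len(pred[0])
    refined = [row[:] for row in pred]
    for y in range(h):
        for x in range(w):
            above = refined[y - 1][x] if y > 0 else refined[y][x]
            left = refined[y][x - 1] if x > 0 else refined[y][x]
            # Change each pixel value using the pixel itself and its neighbors.
            refined[y][x] = (2 * refined[y][x] + above + left + 2) // 4
    return refined
```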
Abstract:
A sub-pel-unit image interpolation method using a transformation-based interpolation filter includes: selecting, based on a sub-pel-unit interpolation location in a region supported by a plurality of interpolation filters for generating at least one sub-pel-unit pixel value located between integer-pel-unit pixels, one of a symmetric interpolation filter and an asymmetric interpolation filter from among the plurality of interpolation filters; and generating the at least one sub-pel-unit pixel value by interpolating the integer-pel-unit pixels using the selected interpolation filter.
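As an illustration of the symmetric/asymmetric distinction only, the following sketch selects a filter by sub-pel location and reports whether its taps are mirror-symmetric; the rule (symmetric taps only at the half-pel position) and the tap values are assumptions, not the filters claimed in the abstract.

```python
# Choose a symmetric or asymmetric interpolation filter by sub-pel location.
def is_symmetric(taps):
    return taps == taps[::-1]

FILTERS = {
    0.25: [-1, 4, -10, 58, 17, -5, 1, 0],    # asymmetric about the location
    0.5:  [-1, 4, -11, 40, 40, -11, 4, -1],  # mirror-symmetric
    0.75: [0, 1, -5, 17, 58, -10, 4, -1],    # asymmetric about the location
}

def select_filter(sub_pel_location):
    taps = FILTERS[sub_pel_location]
    kind = "symmetric" if is_symmetric(taps) else "asymmetric"
    return taps, kind
```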
Abstract:
Provided are scalable video encoding and decoding methods and apparatuses for compensating for inter-layer prediction errors between different layer images by using sample adaptive offsets (SAOs). The scalable video decoding method includes: obtaining, from a received enhancement layer stream, inter-layer SAO use information indicating whether to compensate for prediction errors according to inter-layer prediction between a base layer reconstructed image and an enhancement layer prediction image, and SAO parameters indicating a SAO type of the enhancement layer prediction image and an offset; determining, from the obtained SAO parameters, the SAO type of the enhancement layer prediction image and the offsets corresponding to the prediction errors classified according to categories; and generating an enhancement layer reconstructed image by determining a category of a current sample for each pixel location of the enhancement layer prediction image and using the enhancement layer prediction image compensated by the offsets determined for those categories.
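The sketch below shows, under a band-offset-style classification assumed for illustration, how an enhancement layer prediction image could be compensated sample by sample with offsets chosen per category; the category rule, band width, and function name are not taken from the abstract.

```python
# Compensate a prediction image with per-category offsets (illustrative rule).
def compensate_prediction(pred_image, offsets, use_inter_layer_sao, band_width=32):
    # If inter-layer SAO use information says not to compensate,
    # the prediction image is used as-is.
    if not use_inter_layer_sao:
        return pred_image
    out = []
    for row in pred_image:
        new_row = []
        for sample in row:
            category = sample // band_width          # classify the current sample
            new_row.append(sample + offsets.get(category, 0))
        out.append(new_row)
    return out
```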
Abstract:
The present disclosure relates to signaling of sample adaptive offset (SAO) parameters determined to minimize the error between an original image and a reconstructed image in video encoding and decoding operations. An SAO decoding method includes: obtaining context-encoded leftward SAO merge information and context-encoded upward SAO merge information from a bitstream of a largest coding unit (LCU); obtaining SAO on/off information context-encoded with respect to each color component from the bitstream; if the SAO on/off information indicates that an SAO operation is to be performed, obtaining absolute offset value information for each SAO category, bypass-encoded with respect to each color component, from the bitstream; and obtaining one of band position information and edge class information, bypass-encoded with respect to each color component, from the bitstream.
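The following sketch mirrors the parsing order just described: context-coded merge flags first, then per-component context-coded on/off information, then bypass-coded absolute offsets and either band position or edge class. The reader object and its decode_context/decode_bypass methods are hypothetical stand-ins for an entropy decoder, not a real API.

```python
# Structural sketch of the SAO parameter parsing order (hypothetical reader API).
def parse_sao_params(reader, num_categories=4):
    merge_left = reader.decode_context("sao_merge_left")   # context-coded
    merge_up = reader.decode_context("sao_merge_up")       # context-coded
    if merge_left or merge_up:
        return {"merge_left": merge_left, "merge_up": merge_up}

    params = {"merge_left": False, "merge_up": False, "components": {}}
    for comp in ("Y", "Cb", "Cr"):
        sao_on = reader.decode_context("sao_on_off")        # context-coded per component
        comp_params = {"on": sao_on}
        if sao_on:
            # Absolute offsets and band position / edge class are bypass-coded.
            comp_params["abs_offsets"] = [reader.decode_bypass()
                                          for _ in range(num_categories)]
            comp_params["band_or_edge"] = reader.decode_bypass()
        params["components"][comp] = comp_params
    return params
```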
Abstract:
Provided are a method and apparatus for determining an intra prediction mode of a coding unit. Candidate intra prediction modes of a chrominance component coding unit, which include an intra prediction mode of a luminance component coding unit, are determined, and the costs of the chrominance component coding unit according to the candidate intra prediction modes are compared to determine the minimum-cost candidate as the intra prediction mode of the chrominance component coding unit.
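A minimal sketch of the selection step, assuming a generic cost function: the candidate list includes the luma coding unit's mode, and the candidate with the smallest cost is chosen. The candidate modes and cost metric shown are illustrative.

```python
# Pick the chroma intra prediction mode as the minimum-cost candidate,
# with the luma mode added to the candidate list.
def choose_chroma_mode(luma_mode, default_candidates, cost_fn):
    candidates = list(default_candidates)
    if luma_mode not in candidates:
        candidates.append(luma_mode)       # include the luminance component's mode
    return min(candidates, key=cost_fn)    # compare costs, keep the minimum

# Example usage with a dummy cost function:
# best = choose_chroma_mode(luma_mode=26, default_candidates=[0, 1, 10, 34],
#                           cost_fn=lambda mode: abs(mode - 26))
```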