Abstract:
Provided is a video decoding method including: obtaining correlation information between a luma value and a chroma value from a most probable chroma (MPC) mode reference region of a current chroma block; determining a prediction value of a chroma sample of the current chroma block from luma samples of a current luma block corresponding to the current chroma block, according to the correlation information; and decoding the current chroma block based on the prediction value of the chroma sample.
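The correlation step above can be read as fitting a simple linear model (chroma ~= alpha * luma + beta) over the MPC mode reference region and reusing it on the current block. The Python sketch below is illustrative only: the least-squares fit, the function names, the one-to-one luma-to-chroma mapping, and the 8-bit clipping are assumptions, not the claimed method.

    import numpy as np

    def derive_linear_model(ref_luma, ref_chroma):
        # Fit chroma ~= alpha * luma + beta over the reference region by least squares.
        alpha, beta = np.polyfit(np.asarray(ref_luma, float), np.asarray(ref_chroma, float), 1)
        return alpha, beta

    def predict_chroma(cur_luma, alpha, beta, bit_depth=8):
        # Apply the derived correlation to the current block's co-located luma samples.
        pred = alpha * np.asarray(cur_luma, float) + beta
        return np.clip(np.rint(pred), 0, (1 << bit_depth) - 1).astype(np.int32)

    alpha, beta = derive_linear_model([50, 60, 70, 80], [100, 110, 120, 130])
    print(predict_chroma([[55, 65], [75, 85]], alpha, beta))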
Abstract:
A video encoding method and apparatus, and a video decoding method and apparatus, for generating a reconstructed image having a minimized error between an original image and the reconstructed image. The video decoding method, which involves a sample adaptive offset (SAO) adjustment, includes: obtaining slice SAO parameters with respect to a current slice from a slice header of a received bitstream; obtaining luma SAO use information for a luma component of the current slice and chroma SAO use information for chroma components thereof from among the slice SAO parameters; determining whether to perform a SAO operation on the luma component of the current slice based on the obtained luma SAO use information; and equally determining whether to perform the SAO adjustment on a first chroma component and a second chroma component of the current slice based on the obtained chroma SAO use information.
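A minimal sketch of the slice-level decision described above, assuming one luma flag and one shared chroma flag are read from the slice header; the read_bit callback and the field names are hypothetical, not the signalled syntax.

    from dataclasses import dataclass

    @dataclass
    class SliceSaoFlags:
        luma_sao_enabled: bool
        chroma_sao_enabled: bool  # one flag decides both chroma components together

    def read_slice_sao_flags(read_bit):
        # Hypothetical slice-header parse: one luma flag, one shared chroma flag.
        return SliceSaoFlags(luma_sao_enabled=bool(read_bit()),
                             chroma_sao_enabled=bool(read_bit()))

    def sao_enabled(flags, component):
        # Cb and Cr are enabled or disabled equally, based on the single chroma flag.
        return flags.luma_sao_enabled if component == "Y" else flags.chroma_sao_enabled

    bits = iter([1, 0])
    flags = read_slice_sao_flags(lambda: next(bits))
    print([sao_enabled(flags, c) for c in ("Y", "Cb", "Cr")])  # [True, False, False]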
Abstract:
A video decoding apparatus includes: a splitter configured to split an image into at least one block; a predictor configured to predict a current sample by using at least one of a value obtained by applying a first weight to a first sample predicted earlier than the current sample in a current block and being adjacent to the current sample in a horizontal direction and a value obtained by applying a second weight to a second sample predicted earlier than the current sample in the current block and being adjacent to the current sample in a vertical direction; and a decoder configured to decode the image by using a residual value of the current sample obtained from a bitstream and a prediction value of the current sample.
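The predictor can be pictured as a weighted average of the already-predicted left (horizontal) and above (vertical) neighbors, to which the decoded residual is added. The sketch below assumes a raster scan order, equal weights, and a fixed border value; none of these specifics come from the abstract.

    def predict_block(residuals, w_h=1, w_v=1, border=128):
        # Predict each sample from its left (horizontal) and above (vertical) neighbors,
        # which were predicted earlier in raster order, then add the decoded residual.
        h, w = len(residuals), len(residuals[0])
        block = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                left = block[y][x - 1] if x > 0 else border
                above = block[y - 1][x] if y > 0 else border
                pred = (w_h * left + w_v * above + (w_h + w_v) // 2) // (w_h + w_v)
                block[y][x] = pred + residuals[y][x]
        return block

    print(predict_block([[2, -1], [0, 3]]))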
Abstract:
Provided is a video decoding method including: acquiring offset type information of a current block; determining a neighboring sample of a current reconstruction sample of the current block according to an edge direction when the offset type information of the current block indicates an edge type; determining an offset category of the current reconstruction sample based on a sample value gradient between a sample value of the current reconstruction sample and a sample value of the neighboring sample, and on a difference in the sample value gradient; and applying, from among offsets acquired from a bitstream, an offset according to the determined offset category to the current reconstruction sample.
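One common way to realise such a classification is the sign-of-gradient rule: the signed gradients towards the two neighbors along the edge direction are combined into a category index, and the offset for that category is added. The four-category mapping, the offset table, and the 8-bit clipping below are illustrative assumptions.

    def edge_category(cur, nb0, nb1):
        # Compare the current reconstruction sample with its two neighbors along
        # the edge direction; the signed gradients pick one of four categories.
        sign = lambda d: (d > 0) - (d < 0)
        g = sign(cur - nb0) + sign(cur - nb1)
        return {-2: 1, -1: 2, 1: 3, 2: 4}.get(g, 0)  # 0 means no offset is applied

    def apply_edge_offset(cur, nb0, nb1, offsets, bit_depth=8):
        # offsets: hypothetical table indexed by category, parsed from the bitstream.
        cat = edge_category(cur, nb0, nb1)
        out = cur + (offsets[cat] if cat else 0)
        return min(max(out, 0), (1 << bit_depth) - 1)

    print(apply_edge_offset(90, 100, 95, {1: 3, 2: 2, 3: -2, 4: -3}))  # local minimum -> +3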
Abstract:
A method of encoding a video is provided. The method includes: determining a filtering boundary on which deblocking filtering is to be performed based on at least one data unit from among a plurality of coding units that are hierarchically configured according to depths indicating a number of times at least one maximum coding unit is spatially split, and a plurality of prediction units and a plurality of transformation units respectively for prediction and transformation of the plurality of coding units; determining filtering strength at the filtering boundary based on a prediction mode of a coding unit to which pixels adjacent to the filtering boundary belong from among the plurality of coding units, and transformation coefficient values of the pixels adjacent to the filtering boundary; and performing deblocking filtering on the filtering boundary based on the determined filtering strength.
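A compact sketch of the strength decision, assuming the two inputs named in the abstract (prediction mode of the adjacent coding units and presence of non-zero transformation coefficients); the three-level scale and the dictionary representation of a block are illustrative.

    def boundary_strength(p_block, q_block):
        # p_block / q_block describe the coding units on either side of the boundary.
        if p_block["mode"] == "intra" or q_block["mode"] == "intra":
            return 2  # strongest filtering when an adjacent block is intra coded
        if p_block["has_nonzero_coeffs"] or q_block["has_nonzero_coeffs"]:
            return 1  # weaker filtering when residual coefficients are present
        return 0      # no deblocking across this boundary

    p = {"mode": "inter", "has_nonzero_coeffs": True}
    q = {"mode": "intra", "has_nonzero_coeffs": False}
    print(boundary_strength(p, q))  # 2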
Abstract:
The present disclosure relates to signaling of sample adaptive offset (SAO) parameters determined to minimize an error between an original image and a reconstructed image in video encoding and decoding operations. An SAO decoding method includes obtaining context-encoded leftward SAO merge information and context-encoded upward SAO merge information from a bitstream of a largest coding unit (LCU); obtaining SAO on/off information context-encoded with respect to each color component, from the bitstream; if the SAO on/off information indicates that the SAO operation is to be performed, obtaining absolute offset value information for each SAO category bypass-encoded with respect to each color component, from the bitstream; and obtaining one of band position information and edge class information bypass-encoded with respect to each color component, from the bitstream.
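The parse order implied above can be sketched against a hypothetical entropy-decoder object; the method names decode_context_bin and decode_bypass_bits, the 5-bit field widths, and the category count are assumptions rather than the actual signalled syntax.

    def parse_sao_params(dec, num_components=3, num_categories=4):
        params = {
            "merge_left": dec.decode_context_bin(),   # context-coded merge flags first
            "merge_up": dec.decode_context_bin(),
            "components": [],
        }
        for _ in range(num_components):
            comp = {"sao_on": dec.decode_context_bin()}  # context-coded on/off per component
            if comp["sao_on"]:
                # Remaining fields are bypass-coded: absolute offsets per category,
                # then either a band position or an edge class.
                comp["abs_offsets"] = [dec.decode_bypass_bits(5) for _ in range(num_categories)]
                comp["band_or_edge"] = dec.decode_bypass_bits(5)
            params["components"].append(comp)
        return params

    class StubDecoder:
        # Stand-in for a CABAC-style decoder, just to make the sketch runnable.
        def __init__(self, bins, bits):
            self.bins, self.bits = iter(bins), iter(bits)
        def decode_context_bin(self):
            return next(self.bins)
        def decode_bypass_bits(self, n):
            return next(self.bits)  # returns pre-baked values, ignoring the bit width

    dec = StubDecoder(bins=[0, 1, 1, 0, 0], bits=[3, 1, 0, 2, 7])
    print(parse_sao_params(dec))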
Abstract:
A video decoding method includes determining, from among a first sample and a second sample with different color components, at least one second sample that is used to correct a value of the first sample; determining a filter parameter set based on a band including the value of the first sample, wherein the band is from among a plurality of bands determined by dividing a total range of sample values into signaled intervals or predetermined intervals; and filtering a value of the at least one second sample by using the determined filter parameter set and correcting the value of the first sample by using a value obtained by the filtering, wherein the first sample is any one of a luma sample and a chroma sample, and the second sample is any one of the luma sample and the chroma sample that is not the first sample.
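A minimal sketch of the band-driven correction, assuming equal-width bands over an 8-bit range and a small FIR-style filter per band applied to the other-component samples; the helper name, tap layout, and band count are illustrative.

    import numpy as np

    def correct_first_sample(first_val, second_vals, filter_sets, num_bands=4, bit_depth=8):
        # Pick the filter parameter set from the band that contains the first sample's value.
        max_val = (1 << bit_depth) - 1
        band = min(first_val * num_bands // (max_val + 1), num_bands - 1)
        taps = np.asarray(filter_sets[band], dtype=float)
        # Filter the other-component samples and use the result as a correction term.
        correction = float(taps @ np.asarray(second_vals, dtype=float))
        return int(np.clip(round(first_val + correction), 0, max_val))

    filters = [[0.25, 0.5, 0.25], [0.1, 0.8, 0.1], [0.0, 1.0, 0.0], [0.2, 0.6, 0.2]]
    print(correct_first_sample(100, [1, -2, 1], filters))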
Abstract:
Provided are a video encoding method of adjusting a range of encoded output data to adjust a bit depth while restoring encoded samples, and a video decoding method of substantially preventing overflow from occurring in output data during a decoding process. The video decoding method includes parsing and restoring quantized transformation coefficients in units of blocks of an image from a received bitstream, restoring transformation coefficients by performing inverse quantization on the quantized transformation coefficients, and restoring samples by performing one-dimensional (1D) inverse transformation and inverse scaling on the transformation coefficients. At least one from among the transformation coefficients and the samples has a predetermined bit depth or less.
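A toy sketch of the overflow-prevention idea: each one-dimensional inverse-transform pass is scaled and then clamped to a fixed bit depth so that intermediate data stays within range. The basis matrix, shift amounts, and 16-bit limit are assumptions chosen for illustration.

    import numpy as np

    def clip_to_bit_depth(x, bit_depth=16):
        # Clamp intermediate values to a signed range of the given bit depth.
        lo, hi = -(1 << (bit_depth - 1)), (1 << (bit_depth - 1)) - 1
        return np.clip(x, lo, hi)

    def inverse_transform_2d(coeffs, basis, shift1=7, shift2=12, bit_depth=16):
        # Two 1D passes (columns, then rows), each followed by scaling and clipping.
        tmp = clip_to_bit_depth((basis.T @ coeffs + (1 << (shift1 - 1))) >> shift1, bit_depth)
        out = clip_to_bit_depth((tmp @ basis + (1 << (shift2 - 1))) >> shift2, bit_depth)
        return out

    basis = np.array([[64, 64], [64, -64]], dtype=np.int64)   # toy 2-point transform
    coeffs = np.array([[32000, 0], [0, 0]], dtype=np.int64)
    print(inverse_transform_2d(coeffs, basis))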