Abstract:
Provided are methods and apparatus for encoding and decoding motion information. The method of encoding motion information includes: obtaining motion information candidates by using motion information of prediction units that are temporally or spatially related to a current prediction unit; when the number of obtained motion information candidates is smaller than a predetermined number n, adding alternative motion information to the candidates so that the number of motion information candidates reaches the predetermined number n; determining motion information for the current prediction unit from among the n motion information candidates; and encoding index information indicating the determined motion information as the motion information of the current prediction unit.
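The candidate-padding step above can be sketched in Python. The duplicate removal and the zero-motion-vector filler used here are illustrative assumptions, not the patent's specific rules for choosing alternative motion information.

```python
# Hedged sketch of padding a motion-information candidate list to n entries.
# Candidates are (mv_x, mv_y, ref_idx) tuples gathered from spatially or
# temporally neighboring prediction units; the zero-MV filler is an assumption.

def pad_candidate_list(candidates, n):
    """Ensure the candidate list holds exactly n pieces of motion information."""
    out = list(dict.fromkeys(candidates))[:n]   # drop duplicates, cap at n
    while len(out) < n:
        out.append((0, 0, len(out) % 2))        # assumed filler: zero motion vector
    return out

# The encoder then picks one of the n candidates and signals only its index.
cands = pad_candidate_list([(4, -2, 0), (4, -2, 0), (1, 3, 1)], 5)
```

Because the list always reaches exactly n entries, the index of the chosen candidate can be coded with a fixed, known range.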
Abstract:
A video encoding method and apparatus and a video decoding method and apparatus are provided. The video encoding method includes: prediction encoding in units of a coding unit, as a data unit for encoding a picture, by using partitions determined based on a first partition mode and a partition level, so as to select a partition for outputting an encoding result from among the determined partitions; and encoding and outputting partition information representing the first partition mode and the partition level of the selected partition. The first partition mode represents the shape and directionality of a partition, as a data unit for performing the prediction encoding on the coding unit, and the partition level represents the degree to which the coding unit is split into partitions for detailed motion prediction.
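As a rough illustration of how a partition mode and a partition level could jointly determine partition geometry, the sketch below maps an assumed three-mode encoding ('square', 'horizontal', 'vertical') and a level to partition dimensions; this mapping is a hypothetical reading of the abstract, not the patented scheme.

```python
def partition_sizes(cu_size, mode, level):
    """Return (width, height) of partitions inside a coding unit.

    mode: 'square', 'horizontal', or 'vertical' -- an assumed encoding of
    the abstract's shape/directionality. level: how many times the coding
    unit is halved along the split direction(s) for finer motion prediction.
    """
    w = h = cu_size
    for _ in range(level):
        if mode in ('square', 'vertical'):
            w //= 2                 # vertical splits narrow the partitions
        if mode in ('square', 'horizontal'):
            h //= 2                 # horizontal splits shorten the partitions
    return w, h
```

Under this reading, a higher partition level yields finer partitions in the direction(s) the mode selects, which matches the abstract's "degree of splitting for detailed motion prediction".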
Abstract:
A method of encoding a video is provided, the method including: determining a filtering boundary on which deblocking filtering is to be performed based on at least one data unit from among a plurality of coding units that are hierarchically configured according to depths indicating a number of times at least one maximum coding unit is spatially split, and a plurality of prediction units and a plurality of transformation units respectively for prediction and transformation of the plurality of coding units; determining a filtering strength at the filtering boundary based on a prediction mode of a coding unit to which pixels adjacent to the filtering boundary belong from among the plurality of coding units, and transformation coefficient values of the pixels adjacent to the filtering boundary; and performing deblocking filtering on the filtering boundary based on the determined filtering strength.
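The strength-selection step can be illustrated as follows; the specific tiers (intra neighbors strongest, nonzero coefficients weaker, otherwise no filtering) are assumptions loosely modeled on HEVC-style deblocking, not the exact claimed conditions.

```python
# Illustrative boundary-strength rule for one boundary between blocks P and Q.

def boundary_strength(intra_p, intra_q, nonzero_coeff_p, nonzero_coeff_q):
    """Pick a deblocking filter strength from prediction modes and coefficients.

    intra_*: True if the coding unit containing the adjacent pixels was
    intra predicted; nonzero_coeff_*: True if that side has nonzero
    transformation coefficients.
    """
    if intra_p or intra_q:
        return 2          # strongest filtering next to intra-coded blocks
    if nonzero_coeff_p or nonzero_coeff_q:
        return 1          # weak filtering where residual energy remains
    return 0              # no filtering across this boundary
```

Tying the strength to prediction mode and coefficient values, as the abstract describes, concentrates filtering where blocking artifacts are most likely.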
Abstract:
Provided are a method and apparatus for intra predicting an image, which generate a prediction value via linear interpolation in the horizontal and vertical directions of a current prediction unit. The method includes: generating first and second virtual pixels by using at least one adjacent pixel located to the upper right and lower left of a current prediction unit; obtaining a first prediction value of a current pixel via linear interpolation using the first virtual pixel and an adjacent left pixel located on the same row as the current pixel; obtaining a second prediction value of the current pixel via linear interpolation using the second virtual pixel and an adjacent upper pixel located on the same column as the current pixel; and obtaining a prediction value of the current pixel by using the first and second prediction values.
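The interpolation described above resembles planar-style intra prediction, and a minimal sketch under that assumption looks like this; the exact weighting and rounding are taken from common planar formulations, not from the patent's claims.

```python
def planar_predict(left, top, size):
    """Bilinear planar-style prediction for a size x size block.

    left: reconstructed pixels on the left edge (top to bottom); top: pixels
    on the upper edge (left to right). left[size] and top[size] play the role
    of the lower-left and upper-right neighbors used to form the virtual
    pixels; the averaging details are assumptions.
    """
    pred = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            # horizontal interpolation between the left pixel on this row
            # and the (upper-right-derived) first virtual pixel
            h = (size - 1 - x) * left[y] + (x + 1) * top[size]
            # vertical interpolation between the upper pixel on this column
            # and the (lower-left-derived) second virtual pixel
            v = (size - 1 - y) * top[x] + (y + 1) * left[size]
            pred[y][x] = (h + v + size) // (2 * size)
    return pred
```

Averaging the two interpolations gives each pixel a smooth gradient in both directions, which is the usual motivation for this mode on gently varying regions.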
Abstract:
Provided are video encoding and decoding methods and apparatuses. The video encoding method includes: encoding a video based on data units having a hierarchical structure; determining a context model used for entropy encoding a syntax element of a data unit based on at least one piece of additional information of the data units; and entropy encoding the syntax element by using the determined context model.
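A minimal sketch of selecting a context model from side information of the hierarchical data units follows; the index arithmetic (depth plus coded neighbor flags) is a hypothetical example of "additional information", not the claimed derivation.

```python
# Assumed CABAC-style context selection for one syntax element of a data unit.

def select_context(depth, neighbor_flags, num_contexts=4):
    """Map a data unit's depth and the syntax-element flags of already-coded
    neighbors (e.g. left and above) to a context-model index."""
    ctx = depth + sum(1 for f in neighbor_flags if f)
    return min(ctx, num_contexts - 1)   # clamp into the available context set
```

Conditioning the context on such side information lets the entropy coder use sharper probability estimates than a single shared model would.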
Abstract:
Disclosed is a method of encoding a video, the method including: splitting a current picture into at least one maximum coding unit; determining a coded depth to output a final encoding result according to at least one split region obtained by splitting a region of the maximum coding unit according to depths, by encoding the at least one split region, based on a depth that deepens in proportion to the number of times the region of the maximum coding unit is split; and outputting image data constituting the final encoding result according to the at least one split region, and encoding information about the coded depth and a prediction mode, according to the at least one maximum coding unit.
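The recursive coded-depth decision can be sketched as a cost comparison between coding a region whole and coding its four split sub-regions; the cost function, the 8x8 minimum size, and the tie-breaking toward not splitting are all assumptions.

```python
# Hedged sketch of choosing a coded depth per region of a maximum coding unit.
# cost(x, y, size) is an assumed rate-distortion cost of coding the region
# as a single unit at its current size.

def best_depth(x, y, size, depth, max_depth, cost):
    """Return (total_cost, regions) where regions lists (x, y, size, coded_depth)."""
    whole = cost(x, y, size)
    if depth == max_depth or size == 8:          # assumed smallest coding unit
        return whole, [(x, y, size, depth)]
    half = size // 2
    split_cost, split_map = 0, []
    for dy in (0, half):                          # recurse into the four quadrants
        for dx in (0, half):
            c, m = best_depth(x + dx, y + dy, half, depth + 1, max_depth, cost)
            split_cost += c
            split_map += m
    if split_cost < whole:                        # split only when strictly cheaper
        return split_cost, split_map
    return whole, [(x, y, size, depth)]
```

The depth deepens each time a region is split, so the returned map directly records the coded depth per region, mirroring the abstract's per-region final encoding result.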
Abstract:
An apparatus for decoding an image includes an entropy decoder that performs entropy decoding to generate quantized transformation coefficients of a transformation unit in a coding unit and an inverse transformer that inverse quantizes the quantized transformation coefficients to generate transformation coefficients of the transformation unit and inverse transforms the transformation coefficients to generate residual components of the transformation unit.
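The inverse quantization stage can be illustrated minimally as scaling the entropy-decoded levels back toward transformation-coefficient magnitudes; real codecs derive the scale from a quantization parameter and scaling lists, so the single flat scale here is an assumption.

```python
# Minimal inverse-quantization sketch for one transformation unit.

def inverse_quantize(levels, scale):
    """Return reconstructed transformation coefficients from quantized levels."""
    return [lvl * scale for lvl in levels]
```

The resulting coefficients would then be inverse transformed to produce the residual components the abstract describes.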
Abstract:
A method of decoding an image includes: performing entropy decoding to obtain quantized transformation coefficients of at least one transformation unit in a coding unit of the image; determining a prediction mode of at least one prediction unit in the coding unit from information indicating the prediction mode of the at least one prediction unit; when the prediction mode is determined to be an inter prediction mode rather than an intra prediction mode, determining a size of the at least one transformation unit in the coding unit regardless of a size of the at least one prediction unit in the coding unit; performing inverse quantization and inverse transformation on the quantized transformation coefficients of the at least one transformation unit to obtain residuals; and performing inter prediction for the at least one prediction unit in the coding unit to generate a predictor and restoring the image by using the residuals and the predictor.
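The final restoration step, combining the inter predictor with the inverse-transformed residuals, can be sketched as a clipped sample-wise sum; flat Python lists stand in for real two-dimensional sample arrays.

```python
# Clipped reconstruction: predictor samples plus residuals, kept inside
# the valid sample range for the given bit depth (8-bit by default).

def reconstruct(predictor, residual, bit_depth=8):
    hi = (1 << bit_depth) - 1
    return [min(max(p + r, 0), hi) for p, r in zip(predictor, residual)]
```

Clipping matters because residuals can push predicted samples outside the representable range near strong edges.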