Abstract:
An apparatus for decoding an image, the apparatus including an entropy decoder that performs entropy-decoding to obtain quantized transformation coefficients of at least one transformation unit in a coding unit of the image; a decoder that determines a prediction mode of at least one prediction unit in the coding unit from information indicating the prediction mode for the at least one prediction unit, determines, when the prediction mode is determined to be an inter prediction mode rather than an intra prediction mode, a size of the at least one transformation unit in the coding unit regardless of a size of the at least one prediction unit in the coding unit, and performs inverse-quantization and inverse-transformation on the quantized transformation coefficients of the at least one transformation unit to obtain residuals; and a restorer that performs inter prediction for the at least one prediction unit in the coding unit to generate a predictor and restores the image by using the residuals and the predictor.
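The flow described above can be pictured with a minimal Python sketch (not the claimed apparatus): a toy inverse DCT stands in for the inverse-transformation, a uniform step stands in for inverse-quantization, and `motion_compensate` is a hypothetical helper that copies an integer-pel predictor; the sketch assumes a single transformation unit and a single prediction unit covering the coding unit.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis; its transpose gives the inverse transform."""
    k = np.arange(n)
    d = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    d[0, :] /= np.sqrt(2)
    return d

def dequantize(coeffs: np.ndarray, qstep: float) -> np.ndarray:
    return coeffs * qstep                      # uniform reconstruction, for illustration only

def inverse_transform(coeffs: np.ndarray) -> np.ndarray:
    d = dct_matrix(coeffs.shape[0])
    return d.T @ coeffs @ d                    # 2-D inverse DCT

def motion_compensate(reference: np.ndarray, mv, shape) -> np.ndarray:
    """Hypothetical inter prediction: copy the block an integer motion vector points to."""
    y, x = mv
    h, w = shape
    return reference[y:y + h, x:x + w]

def decode_inter_cu(quantized_tu: np.ndarray, qstep: float,
                    reference: np.ndarray, mv) -> np.ndarray:
    """Residuals come from the transformation unit, whose size was chosen
    independently of the prediction unit; the predictor comes from inter prediction."""
    residual = inverse_transform(dequantize(quantized_tu, qstep))
    predictor = motion_compensate(reference, mv, quantized_tu.shape)
    return np.clip(predictor + residual, 0, 255)

reference = np.random.randint(0, 256, (64, 64)).astype(float)
tu = np.random.randint(-8, 8, (8, 8)).astype(float)
restored = decode_inter_cu(tu, qstep=4.0, reference=reference, mv=(10, 12))
```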
Abstract:
Disclosed are an image encoding method and apparatus for encoding an image by grouping a plurality of adjacent prediction units into a transformation unit and transforming the plurality of adjacent prediction units into a frequency domain, and an image decoding method and apparatus for decoding an image encoded by using the image encoding method and apparatus.
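As a rough illustration of the grouping idea (not the disclosed encoder), the sketch below tiles the residuals of several adjacent prediction units into one larger transformation unit and applies a single 2-D DCT over the grouped block; `dct2` and the 2x2 layout are assumptions made for the example.

```python
import numpy as np

def dct2(block: np.ndarray) -> np.ndarray:
    """Plain orthonormal 2-D DCT, used here only as a stand-in transform."""
    n = block.shape[0]
    k = np.arange(n)
    d = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    d[0, :] /= np.sqrt(2)
    return d @ block @ d.T

def transform_grouped_pus(pu_residuals, layout):
    """Tile prediction-unit residual blocks according to `layout` (rows x cols)
    and apply one transform over the whole grouped transformation unit."""
    rows, cols = layout
    grouped = np.block([[pu_residuals[r * cols + c] for c in range(cols)]
                        for r in range(rows)])
    return dct2(grouped)

# Example: four 4x4 prediction-unit residuals grouped into one 8x8 transformation unit.
pus = [np.random.randint(-16, 16, (4, 4)).astype(float) for _ in range(4)]
coeffs = transform_grouped_pus(pus, layout=(2, 2))
```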
Abstract:
Disclosed is a method of encoding a video, the method including: splitting a current picture into at least one maximum coding unit; determining a coded depth at which to output a final encoding result for at least one split region, obtained by splitting a region of the maximum coding unit according to depths that deepen in proportion to the number of times the region of the maximum coding unit is split, by encoding the at least one split region; and outputting, according to the at least one maximum coding unit, image data constituting the final encoding result for the at least one split region, together with encoding information about the coded depth and a prediction mode.
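The depth-recursive decision can be sketched as follows, under assumed simplifications: a toy cost (residual energy after mean removal plus a fixed signalling overhead) replaces real rate-distortion measurement, and quadtree splitting stops at an assumed maximum depth.

```python
import numpy as np

LAMBDA_OVERHEAD = 50.0   # assumed per-coding-unit signalling cost (illustrative)

def cost(block: np.ndarray) -> float:
    """Toy encoding cost: energy left after removing the block mean, plus overhead."""
    return float(np.sum((block - block.mean()) ** 2)) + LAMBDA_OVERHEAD

def best_depth(block: np.ndarray, depth: int = 0, max_depth: int = 3):
    """Return (cost, coded-depth tree), choosing per region whether to encode it
    at the current depth or split it further (depth deepens with each split)."""
    no_split = cost(block)
    n = block.shape[0]
    if depth == max_depth or n <= 8:
        return no_split, depth
    h = n // 2
    subs = [best_depth(block[r:r + h, c:c + h], depth + 1, max_depth)
            for r in (0, h) for c in (0, h)]
    split_cost = sum(c for c, _ in subs)
    if split_cost < no_split:
        return split_cost, [t for _, t in subs]   # coded depths of the four split regions
    return no_split, depth                        # this region's coded depth

# Example on a random 64x64 maximum coding unit.
lcu = np.random.randint(0, 256, (64, 64)).astype(float)
total_cost, coded_depths = best_depth(lcu)
```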
Abstract:
Provided are methods and apparatus for encoding and decoding motion information. The method of encoding motion information includes: obtaining a motion information candidate by using motion information of prediction units that are temporally or spatially related to a current prediction unit; adding, when the number of pieces of motion information included in the motion information candidate is smaller than a predetermined number n, alternative motion information to the motion information candidate so that the number of pieces of motion information included in the motion information candidate reaches the predetermined number n; determining motion information with respect to the current prediction unit from among the n pieces of motion information included in the motion information candidate; and encoding index information indicating the determined motion information as the motion information of the current prediction unit.
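A minimal sketch of the candidate-list construction, assuming that motion information is just an (x, y) motion vector, that padding uses the zero vector as the alternative motion information, and that the encoder picks the candidate nearest to the actual vector; these choices are illustrative, not the claimed method.

```python
from typing import List, Tuple

MV = Tuple[int, int]

def build_candidate_list(spatial: List[MV], temporal: List[MV], n: int) -> List[MV]:
    """Merge spatial/temporal candidates, drop duplicates, then pad with an
    alternative candidate so the list always holds exactly n entries."""
    candidates: List[MV] = []
    for mv in spatial + temporal:
        if mv not in candidates:
            candidates.append(mv)
        if len(candidates) == n:
            return candidates
    while len(candidates) < n:          # alternative motion information (assumed zero vector)
        candidates.append((0, 0))
    return candidates

def encode_motion_info(current_mv: MV, candidates: List[MV]) -> int:
    """Encode only the index of the candidate chosen for the current prediction unit."""
    # Pick the candidate closest to the actual motion vector (illustrative criterion).
    return min(range(len(candidates)),
               key=lambda i: abs(candidates[i][0] - current_mv[0]) +
                             abs(candidates[i][1] - current_mv[1]))

cands = build_candidate_list(spatial=[(2, 1), (2, 1)], temporal=[(0, 3)], n=5)
index = encode_motion_info(current_mv=(1, 2), candidates=cands)
```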
Abstract:
A video encoding method and apparatus and a video decoding method and apparatus are provided. The video encoding method includes: prediction encoding in units of a coding unit, as a data unit for encoding a picture, by using partitions determined based on a first partition mode and a partition level, so as to select a partition for outputting an encoding result from among the determined partitions; and encoding and outputting partition information representing the first partition mode and the partition level of the selected partition. The first partition mode represents a shape and directionality of a partition, as a data unit for performing the prediction encoding on the coding unit, and the partition level represents a degree to which the coding unit is split into partitions for detailed motion prediction.
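The partition signalling can be pictured with the sketch below, which assumes three illustrative modes (square, horizontal, vertical) for the shape and directionality and assumes that each partition level doubles the number of partitions along the chosen direction; the mode names and the level rule are assumptions, not the defined syntax.

```python
from typing import List, Tuple

Rect = Tuple[int, int, int, int]   # (top, left, height, width)

def partitions(cu_size: int, mode: str, level: int) -> List[Rect]:
    """Return partition rectangles of a cu_size x cu_size coding unit."""
    parts = 2 ** level                        # finer levels -> more detailed motion prediction
    if mode == "horizontal":                  # stacked horizontal slabs
        h = cu_size // parts
        return [(i * h, 0, h, cu_size) for i in range(parts)]
    if mode == "vertical":                    # side-by-side vertical slabs
        w = cu_size // parts
        return [(0, i * w, cu_size, w) for i in range(parts)]
    # "square": a parts x parts grid
    s = cu_size // parts
    return [(r * s, c * s, s, s) for r in range(parts) for c in range(parts)]

def encode_partition_info(mode: str, level: int) -> Tuple[str, int]:
    """Only the (first partition mode, partition level) pair is signalled."""
    return mode, level

# Example: a 32x32 coding unit split into four vertical partitions (level 2).
rects = partitions(32, "vertical", 2)
info = encode_partition_info("vertical", 2)
```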
Abstract:
Methods and apparatuses for encoding and decoding an intra prediction mode of a prediction unit of a chrominance component based on an intra prediction mode of a prediction unit of a luminance component are provided. When an intra prediction mode of a prediction unit of a luminance component is the same as an intra prediction mode in an intra prediction mode candidate group of a prediction unit of a chrominance component, the intra prediction mode candidate group of the prediction unit of the chrominance component is reconstructed by excluding from the candidate group, or replacing, the intra prediction mode of the prediction unit of the chrominance component that is the same as the intra prediction mode of the prediction unit of the luminance component, and the intra prediction mode of the prediction unit of the chrominance component is encoded by using the reconstructed intra prediction mode candidate group.
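A minimal sketch of the candidate-group reconstruction, assuming small integer mode indices and a fixed substitute mode (here 34) for the replacement case; the candidate group, the substitute, and the plain index coding are illustrative assumptions.

```python
from typing import List

def reconstruct_chroma_candidates(luma_mode: int,
                                  chroma_candidates: List[int],
                                  substitute_mode: int = 34) -> List[int]:
    """If the luma mode already appears in the chroma candidate group, replace
    that duplicate entry with a substitute mode so no candidate is redundant."""
    return [substitute_mode if m == luma_mode else m for m in chroma_candidates]

def encode_chroma_mode(chroma_mode: int, candidates: List[int]) -> int:
    """Encode the chroma intra mode as an index into the reconstructed group."""
    return candidates.index(chroma_mode)

# Example: luma mode 26 collides with a chroma candidate and is replaced by mode 34.
group = reconstruct_chroma_candidates(luma_mode=26,
                                      chroma_candidates=[0, 1, 10, 26])
idx = encode_chroma_mode(chroma_mode=10, candidates=group)
```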
Abstract:
A video encoding method, a video encoding apparatus, a video decoding method, and a video decoding apparatus are provided. The video encoding method includes: producing a fast transform matrix based on a transform matrix used for frequency transformation of a block having a predetermined size; producing a transformed block by transforming the block having the predetermined size by using the fast transform matrix; and performing scaling with respect to the transformed block in order to correct a difference between the transform matrix used for the frequency transformation and the fast transform matrix.
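One way to picture the fast-transform-plus-scaling idea, under assumptions made for this sketch: the "fast" matrix is an integer rounding of an orthonormal DCT matrix, and the scaling restores the row norms lost to rounding. Neither choice is taken from the disclosure.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix (the reference transform matrix)."""
    k = np.arange(n)
    d = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    d[0, :] /= np.sqrt(2)
    return d

def fast_matrix(d: np.ndarray, scale: int = 64) -> np.ndarray:
    """Integer approximation of d, suitable for fast fixed-point arithmetic."""
    return np.round(d * scale)

def transform_with_scaling(block: np.ndarray, n: int = 8) -> np.ndarray:
    d = dct_matrix(n)
    f = fast_matrix(d)
    coeffs = f @ block @ f.T                     # transform with the fast matrix
    # Scaling corrects the difference between f and the true transform matrix d:
    row_scale = np.sqrt((d ** 2).sum(axis=1) / (f ** 2).sum(axis=1))
    return coeffs * np.outer(row_scale, row_scale)

block = np.random.randint(0, 256, (8, 8)).astype(float)
approx_coeffs = transform_with_scaling(block)
```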
Abstract:
A method of encoding a video is provided, the method including: determining a filtering boundary on which deblocking filtering is to be performed, based on at least one data unit from among a plurality of coding units that are hierarchically configured according to depths indicating a number of times at least one maximum coding unit is spatially split, a plurality of prediction units for prediction of the plurality of coding units, and a plurality of transformation units for transformation of the plurality of coding units; determining a filtering strength at the filtering boundary based on a prediction mode of a coding unit, from among the plurality of coding units, to which pixels adjacent to the filtering boundary belong, and on transformation coefficient values of the pixels adjacent to the filtering boundary; and performing deblocking filtering on the filtering boundary based on the determined filtering strength.
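The strength decision and the filtering step can be sketched as below, assuming a simple rule (intra neighbours give the strongest filtering, nonzero transformation coefficients a medium one) and a basic blend across a vertical boundary; both the rule and the filter are illustrative assumptions, not the specified deblocking process.

```python
import numpy as np

def boundary_strength(left_is_intra: bool, right_is_intra: bool,
                      left_has_coeffs: bool, right_has_coeffs: bool) -> int:
    if left_is_intra or right_is_intra:       # prediction mode drives the strongest filtering
        return 2
    if left_has_coeffs or right_has_coeffs:   # nonzero transformation coefficient values
        return 1
    return 0

def deblock_vertical_boundary(pixels: np.ndarray, col: int, strength: int) -> np.ndarray:
    """Smooth the two pixel columns adjacent to a vertical filtering boundary."""
    if strength == 0:
        return pixels
    out = pixels.astype(float).copy()
    p, q = out[:, col - 1], out[:, col]
    avg = (p + q) / 2
    w = 0.5 if strength == 2 else 0.25        # stronger boundaries blend more heavily
    out[:, col - 1] = (1 - w) * p + w * avg
    out[:, col] = (1 - w) * q + w * avg
    return out

frame = np.random.randint(0, 256, (8, 16)).astype(float)
bs = boundary_strength(True, False, True, True)
filtered = deblock_vertical_boundary(frame, col=8, strength=bs)
```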