Abstract:
A method of decoding video, the method including: receiving and parsing a bitstream of encoded video; extracting, from the bitstream, encoded image data of a current picture assigned to at least one maximum coding unit, information about a coded depth and an encoding mode for each of the at least one maximum coding unit, and filter coefficient information for performing loop filtering on the current picture; decoding the encoded image data in units of the at least one maximum coding unit, based on the information about the coded depth and the encoding mode for each of the at least one maximum coding unit; and performing deblocking on the decoded image data of the current picture and performing loop filtering on the deblocked data based on continuous one-dimensional (1D) filtering.
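The continuous 1D filtering stage can be pictured as running a one-dimensional filter along each row of the deblocked picture and then along each column of the result, with coefficients taken from the parsed filter coefficient information. The Python sketch below illustrates only that idea; the example coefficient values and the horizontal-then-vertical ordering are assumptions for illustration, not the coefficients or order defined by the method.

```python
# Illustrative sketch of loop filtering as successive 1D passes.
# The coefficient values and the horizontal-then-vertical order are
# assumptions made for this example only.

def filter_1d(line, coeffs):
    """Apply a symmetric 1D FIR filter to one row or column of samples."""
    half = len(coeffs) // 2
    out = []
    for i in range(len(line)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            j = min(max(i + k - half, 0), len(line) - 1)  # clamp at the picture border
            acc += c * line[j]
        out.append(acc)
    return out

def continuous_1d_loop_filter(picture, h_coeffs, v_coeffs):
    """Filter every row, then every column, of the deblocked picture."""
    rows = [filter_1d(row, h_coeffs) for row in picture]
    cols = list(zip(*rows))                      # transpose to filter columns
    cols = [filter_1d(list(col), v_coeffs) for col in cols]
    return [list(row) for row in zip(*cols)]     # transpose back

deblocked = [[float((x + y) % 8) for x in range(8)] for y in range(8)]
filtered = continuous_1d_loop_filter(deblocked, [0.25, 0.5, 0.25], [0.25, 0.5, 0.25])
print(filtered[0])
```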
Abstract:
Entropy encoding and entropy decoding of image data are respectively performed, whereby context modeling is performed on a context unit of blocks of the image data based on a context model of a previously encoded or previously decoded block.
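As a rough illustration, this kind of context modeling selects the probability model for a block's symbols from state carried over from an already coded neighboring block. The Python sketch below is a hypothetical toy, not the method's actual context tables: the two-model table and the "previous block had coefficients" rule are assumptions made only to show the selection step.

```python
# Toy illustration: pick a context model for the current block from the
# context state of the previously coded block. The two-model table and the
# "previous block had coefficients" rule are assumptions for this example.

CONTEXT_MODELS = {
    0: {"p_one": 0.3},   # model used when the previous block was all-zero
    1: {"p_one": 0.6},   # model used when the previous block had coefficients
}

def select_context(prev_block_coded: bool) -> dict:
    """Context modeling step: reuse information from the previously coded block."""
    return CONTEXT_MODELS[1 if prev_block_coded else 0]

def encode_flags(blocks):
    """Entropy-code a coded-block flag per block using the selected context."""
    prev_coded = False
    out = []
    for block in blocks:
        ctx = select_context(prev_coded)
        coded = any(block)
        # A real arithmetic coder would use ctx["p_one"]; here we just record it.
        out.append((int(coded), ctx["p_one"]))
        prev_coded = coded
    return out

print(encode_flags([[0, 0, 0], [3, 0, 1], [0, 2, 0]]))
```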
Abstract:
A method and apparatus for encoding and decoding a motion vector of a current block. The method of encoding includes: generating information about the motion vector based on a motion vector of the current block and a motion vector predictor of the current block, by estimating the motion vector and determining a first motion vector predictor candidate from among a plurality of motion vector predictor candidates as the motion vector predictor based on a result of the estimating; and generating a virtual motion vector by using a second motion vector predictor candidate and the information about the motion vector, generating vector differences between the virtual motion vector and the plurality of motion vector predictor candidates, comparing the vector differences with the information about the motion vector, and selectively excluding the second motion vector predictor candidate according to a result of the comparing.
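In rough terms, the exclusion step reconstructs a "virtual" motion vector from an alternative predictor candidate and the transmitted difference, and drops that candidate when some other candidate would have represented the virtual vector more cheaply than the transmitted difference does. The Python sketch below is one interpretation under those assumptions; the cost measure (sum of absolute components) and the strictly-cheaper comparison rule are assumptions for illustration, not the criterion defined by the method.

```python
# Illustrative pruning of motion vector predictor candidates using virtual
# motion vectors. The cost measure and the "strictly cheaper" rule are
# assumptions for this sketch.

def cost(vec):
    return abs(vec[0]) + abs(vec[1])

def prune_candidates(candidates, mvd):
    """Keep a candidate only if no other candidate explains its virtual MV more cheaply."""
    kept = []
    for i, second in enumerate(candidates):
        # Virtual motion vector built from the second candidate and the signalled difference.
        virtual = (second[0] + mvd[0], second[1] + mvd[1])
        # Vector differences between the virtual MV and every candidate.
        diffs = [(virtual[0] - c[0], virtual[1] - c[1]) for c in candidates]
        cheaper_exists = any(cost(d) < cost(mvd) for j, d in enumerate(diffs) if j != i)
        if not cheaper_exists:
            kept.append(second)
    return kept

candidates = [(4, 2), (5, 2), (10, -1)]
mvd = (2, 0)   # information about the motion vector (the transmitted difference)
print(prune_candidates(candidates, mvd))   # (4, 2) is excluded in this toy example
```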
Abstract:
Provided are a video encoding method and a video decoding method based on spatial subdivisions, which include splitting a picture into a first tile and a second tile, splitting a current tile among the first tile and the second tile into at least one slice segment, encoding the first tile and the second tile independently of each other, and encoding maximum coding units of a current slice segment among the at least one slice segment included in the current tile.
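The spatial subdivision described here can be illustrated by splitting a picture's grid of maximum coding units into two tiles, splitting a tile into slice segments, and processing each tile independently. The Python sketch below is only a toy partitioning of maximum-coding-unit addresses; the tile boundary position and the segment length are assumptions chosen for the example.

```python
# Toy spatial subdivision: split a picture's maximum coding units (CTUs)
# into two tiles by a vertical boundary, then split a tile into slice
# segments. The boundary position and segment length are assumptions.

def split_into_tiles(ctus_per_row, ctu_rows, boundary_col):
    """Return CTU addresses of the first and second tile."""
    first, second = [], []
    for y in range(ctu_rows):
        for x in range(ctus_per_row):
            addr = y * ctus_per_row + x
            (first if x < boundary_col else second).append(addr)
    return first, second

def split_into_slice_segments(tile_ctus, segment_len):
    """Split one tile's CTUs into consecutive slice segments."""
    return [tile_ctus[i:i + segment_len] for i in range(0, len(tile_ctus), segment_len)]

first_tile, second_tile = split_into_tiles(ctus_per_row=6, ctu_rows=4, boundary_col=3)
for tile in (first_tile, second_tile):           # tiles are coded independently of each other
    for segment in split_into_slice_segments(tile, segment_len=5):
        print("encode slice segment:", segment)  # stand-in for encoding its maximum coding units
```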
Abstract:
Provided are a method and apparatus for estimating a motion vector using a plurality of motion vector predictors, an encoder, a decoder, and a decoding method. The method includes calculating spatial similarities between a current block and a plurality of neighboring partitions around the current block, selecting at least one of the neighboring partitions based on the calculated spatial similarities, and estimating a motion vector of the selected partition as the motion vector of the current block.
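One way to read this is that a spatial similarity between samples around the current block and each neighboring partition decides whose motion vector is borrowed. The Python sketch below uses a sum-of-absolute-differences measure over small sample templates; both the SAD measure and the templates are assumptions made purely for illustration.

```python
# Illustrative selection of a motion vector from neighboring partitions by
# spatial similarity. Using SAD over a few boundary samples as the
# similarity measure is an assumption for this sketch.

def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def estimate_motion_vector(current_template, neighbors):
    """Pick the motion vector of the spatially most similar neighboring partition."""
    best = min(neighbors, key=lambda n: sad(current_template, n["template"]))
    return best["mv"]

neighbors = [
    {"template": [10, 12, 14, 16], "mv": (3, -1)},   # left partition
    {"template": [50, 52, 54, 56], "mv": (0, 2)},    # upper partition
    {"template": [11, 12, 15, 17], "mv": (4, -1)},   # upper-right partition
]
print(estimate_motion_vector([10, 13, 14, 16], neighbors))  # -> (3, -1)
```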
Abstract:
Provided are a method and apparatus for intra predicting an image, which generate a prediction value via linear interpolation in horizontal and vertical directions of a current prediction unit. The method includes: generating first and second virtual pixels by using at least one adjacent pixel located to the upper right and lower left of a current prediction unit; obtaining a first prediction value of a current pixel via linear interpolation using the first virtual pixel and an adjacent left pixel located on the same line as the current pixel; obtaining a second prediction value of the current pixel via linear interpolation using the second virtual pixel and an adjacent upper pixel located on the same column as the current pixel; and obtaining a prediction value of the current pixel by using the first and second prediction values.
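The interpolation described here closely resembles HEVC's planar intra mode, in which the virtual right and bottom samples come from the upper-right and lower-left neighbors and each pixel averages a horizontal and a vertical linear interpolation. The sketch below follows that reading; treating it as exactly the signalled equation of the method is an assumption.

```python
# Sketch of bi-linear intra prediction with virtual right/bottom pixels,
# in the spirit of HEVC planar mode (using that formula is an assumption).

def planar_predict(left, top, top_right, bottom_left, n):
    """left[y], top[x]: reconstructed neighbors; returns an n x n prediction block."""
    pred = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            # Horizontal interpolation between the adjacent left pixel and the
            # first virtual pixel (derived from the upper-right neighbor).
            horiz = (n - 1 - x) * left[y] + (x + 1) * top_right
            # Vertical interpolation between the adjacent upper pixel and the
            # second virtual pixel (derived from the lower-left neighbor).
            vert = (n - 1 - y) * top[x] + (y + 1) * bottom_left
            # Average of the two prediction values, with rounding.
            pred[y][x] = (horiz + vert + n) // (2 * n)
    return pred

left = [100, 102, 104, 106]
top = [98, 99, 101, 103]
print(planar_predict(left, top, top_right=105, bottom_left=108, n=4))
```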
Abstract:
Encoding and decoding a video using a transformation index that indicates a structure of a transformation unit for transforming data of a current coding unit.
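A transformation index of this kind can be read as compactly signaling how the transformation unit of a coding unit is split. The Python sketch below is a hypothetical decoding of such an index into transform-unit sizes; the specific mapping from index values to split depths is an assumption for illustration.

```python
# Hypothetical interpretation of a transformation index: the index gives the
# number of times the transformation unit of a coding unit is split in half
# along each dimension. This mapping is an assumption for illustration only.

def transform_units_from_index(coding_unit_size: int, transform_index: int):
    """Return the transform-unit layout (size and count) implied by the index."""
    tu_size = coding_unit_size >> transform_index    # halve width/height per split level
    tu_per_side = 1 << transform_index
    return tu_size, tu_per_side * tu_per_side

for index in range(3):
    size, count = transform_units_from_index(coding_unit_size=32, transform_index=index)
    print(f"index {index}: {count} transform unit(s) of {size}x{size}")
```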
Abstract:
Provided are video encoding and decoding methods and apparatuses. The video encoding method includes: encoding a video based on data units having a hierarchical structure; determining a context model used for entropy encoding a syntax element of a data unit based on at least one piece of additional information of the data units; and entropy encoding the syntax element by using the determined context model.
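The context-model determination can be illustrated as indexing a table of models by additional information of the data unit, such as its depth and size within the hierarchical structure. In the Python sketch below, using depth and size as that additional information, and the three-model table, are assumptions made for the example.

```python
# Toy context-model selection for entropy coding a syntax element, keyed by
# additional information of the data unit (here its depth and size, which
# are assumptions for this illustration).

CONTEXT_TABLE = {
    0: {"p_zero": 0.9},   # shallow, large data units
    1: {"p_zero": 0.7},
    2: {"p_zero": 0.5},   # deep, small data units
}

def determine_context(depth: int, size: int) -> dict:
    """Map the data unit's additional information to one of the context models."""
    bucket = min(depth, 2) if size >= 8 else 2
    return CONTEXT_TABLE[bucket]

def entropy_encode(syntax_element: int, context: dict) -> str:
    """Stand-in for arithmetic coding: just report the model that would be used."""
    return f"code {syntax_element} with p_zero={context['p_zero']}"

print(entropy_encode(1, determine_context(depth=0, size=64)))
print(entropy_encode(0, determine_context(depth=2, size=4)))
```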