Abstract:
A method and apparatus for encoding and decoding a motion vector of a current block. The method of encoding includes: generating information about the motion vector based on a motion vector of the current block and a motion vector predictor of the current block by estimating the motion vector and determining a first motion vector predictor candidate from among a plurality of motion vector predictor candidates as the motion vector predictor based on a result of the estimating; and generating a virtual motion vector by using a second motion vector predictor candidate and the information about the motion vector, generating vector differences between the virtual motion vector and the plurality of motion vector predictor candidates, comparing the vector differences with the information about the motion vector, and selectively excluding the second motion vector predictor candidate according to the comparing.
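As an illustration of the exclusion rule described above, the following Python sketch forms a virtual motion vector from a second candidate and the transmitted motion vector information, and drops that candidate when differencing the virtual vector against another candidate would yield information cheaper to encode than what was actually transmitted. The function names and the component-wise absolute-value cost measure are assumptions made for the sketch, not the patented implementation.

def mvd_cost(d):
    # Hypothetical bit-cost proxy: sum of absolute vector components.
    return abs(d[0]) + abs(d[1])

def can_exclude(candidate, mvd, candidates):
    # Virtual motion vector: the candidate plus the transmitted difference.
    virtual_mv = (candidate[0] + mvd[0], candidate[1] + mvd[1])
    for other in candidates:
        if other == candidate:
            continue
        # Vector difference between the virtual motion vector and another candidate.
        diff = (virtual_mv[0] - other[0], virtual_mv[1] - other[1])
        # If another candidate would have produced cheaper information, the
        # encoder could not have paired this mvd with this candidate, so it
        # may be excluded without signalling.
        if mvd_cost(diff) < mvd_cost(mvd):
            return True
    return False

candidates = [(0, 0), (-3, -2)]
mvd = (4, 3)
pruned = [c for c in candidates if not can_exclude(c, mvd, candidates)]  # [(0, 0)]

Because the rule uses only the candidate list and the transmitted information, an encoder and a decoder that apply it identically arrive at the same reduced candidate list, so fewer bits are needed to indicate which predictor was used.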
Abstract:
A video decoding method including: extracting, from a bitstream of an encoded video, at least one of information indicating independent parsing of a data unit and information indicating independent decoding of a data unit; extracting encoded video data and information about a coded depth and an encoding mode according to maximum coding units by parsing the bitstream based on the information indicating independent parsing of the data unit; and decoding at least one coding unit according to a coded depth of each maximum coding unit of the encoded video data, based on the information indicating independent decoding of the data unit and the information about the coded depth and the encoding mode according to maximum coding units.
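Purely as an illustrative sketch of how such signalling could drive a decoder loop, the Python below reads the two flags and then parses and decodes each maximum coding unit. Every method on the assumed bitstream and decoder objects (read_flag, maximum_coding_units, reset_entropy_context, parse_maximum_coding_unit, decode_maximum_coding_unit) is a hypothetical placeholder, not an interface defined by the method above.

def decode_sequence(bitstream, decoder):
    # Hypothetical flags extracted from the encoded video bitstream.
    independent_parsing = bitstream.read_flag()    # independent parsing of a data unit
    independent_decoding = bitstream.read_flag()   # independent decoding of a data unit

    for unit in bitstream.maximum_coding_units():
        if independent_parsing:
            # Parse each maximum coding unit without entropy state carried
            # over from its neighbours.
            bitstream.reset_entropy_context()
        coded_depth, encoding_mode, data = bitstream.parse_maximum_coding_unit(unit)
        decoder.decode_maximum_coding_unit(
            data, coded_depth, encoding_mode,
            use_neighbor_information=not independent_decoding)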
Abstract:
A method and apparatus for determining an intra prediction mode of a coding unit. Candidate intra prediction modes of a chrominance component coding unit, which include an intra prediction mode of a luminance component coding unit, are determined, and costs of the chrominance component coding unit according to the determined candidate intra prediction modes are compared, and the candidate with the minimum cost is determined to be the intra prediction mode of the chrominance component coding unit.
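A minimal sketch of that selection, assuming a caller-supplied cost function (for example a rate-distortion cost) and treating the mode identifiers as opaque values; the names below are placeholders rather than the apparatus's actual interfaces.

def select_chroma_intra_mode(chroma_block, fixed_candidate_modes, luma_mode, cost):
    # Candidate list for the chrominance coding unit: a few fixed modes plus
    # the intra prediction mode already determined for the luminance coding unit.
    candidates = list(dict.fromkeys(list(fixed_candidate_modes) + [luma_mode]))
    # Compare the cost of coding the chrominance block with each candidate
    # and keep the minimum-cost mode.
    return min(candidates, key=lambda mode: cost(chroma_block, mode))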
Abstract:
Provided are methods and apparatuses for encoding and decoding a motion vector. The method of encoding a motion vector includes: selecting a mode from among a first mode in which information indicating a motion vector predictor from among at least one motion vector predictor is encoded and a second mode in which information indicating generation of a motion vector predictor based on pixels included in a previously encoded area adjacent to a current block is encoded; determining a motion vector predictor of the current block according to the selected mode and encoding information about the motion vector predictor of the current block; and encoding a difference vector between a motion vector of the current block and the motion vector predictor of the current block.
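The mode decision can be pictured with the small Python sketch below, which compares the cheapest explicitly signalled predictor (first mode) against a predictor derived from the previously encoded adjacent area (second mode). The one-bit index cost and the absolute-component bit model are assumptions made only for the illustration.

def vector_difference(a, b):
    return (a[0] - b[0], a[1] - b[1])

def bit_cost(v):
    # Hypothetical bit-cost model for a difference vector.
    return abs(v[0]) + abs(v[1])

def choose_predictor_mode(mv, explicit_candidates, implicit_predictor):
    # First mode: information indicating one of the explicit candidates is encoded.
    best = min(range(len(explicit_candidates)),
               key=lambda i: bit_cost(vector_difference(mv, explicit_candidates[i])))
    explicit_cost = bit_cost(vector_difference(mv, explicit_candidates[best])) + 1  # assumed index bits
    # Second mode: the predictor is generated from adjacent, previously encoded
    # pixels, so no index needs to be encoded.
    implicit_cost = bit_cost(vector_difference(mv, implicit_predictor))
    if explicit_cost <= implicit_cost:
        return "first_mode", best, vector_difference(mv, explicit_candidates[best])
    return "second_mode", None, vector_difference(mv, implicit_predictor)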
Abstract:
An apparatus and method for encoding video data and an apparatus and method for decoding video data are provided. The encoding method includes: splitting a current picture into at least one maximum coding unit; determining a coded depth to output an encoding result by encoding at least one split region of the at least one maximum coding unit according to an operating mode of a coding tool, respectively, based on a relationship among a depth of at least one coding unit of the at least one maximum coding unit, the coding tool, and the operating mode, wherein the at least one split region is generated by hierarchically splitting the at least one maximum coding unit according to depths; and outputting a bitstream including encoded video data of the coded depth, information regarding the coded depth of the at least one maximum coding unit, information regarding an encoding mode, and information regarding the relationship.
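The depth decision can be sketched as a recursion that, at each depth, compares the cost of coding a region as-is (with the operating mode the relationship assigns to that depth) against the cost of splitting it further. All of the callbacks (cost_at, split, operating_mode_for) are assumed placeholders for whatever coding tool and cost measure an encoder actually uses.

def coded_depth_cost(region, depth, max_depth, cost_at, split, operating_mode_for):
    # Operating mode of the coding tool associated with this depth
    # (the depth / coding tool / operating mode relationship).
    mode = operating_mode_for(depth)
    cost_here = cost_at(region, depth, mode)
    if depth == max_depth:
        return cost_here
    # Hierarchically split the region and encode the split regions one depth deeper.
    cost_split = sum(
        coded_depth_cost(sub, depth + 1, max_depth, cost_at, split, operating_mode_for)
        for sub in split(region))
    # The coded depth for this region is whichever alternative costs less.
    return min(cost_here, cost_split)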
Abstract:
A decoding apparatus includes a splitter, a parser, and an inverse-transformer. The splitter splits the image into a plurality of maximum coding units, hierarchically splits a maximum coding unit among the plurality of maximum coding units into a plurality of coding units, and determines one or more transformation residual blocks from a coding unit among the plurality of coding units, wherein the one or more transformation residual blocks include sub residual blocks. The parser obtains an effective coefficient flag of a sub residual block among the sub residual blocks from a bitstream, the effective coefficient flag of the sub residual block indicating whether at least one non-zero effective transformation coefficient exists in the sub residual block, and, when the effective coefficient flag indicates that at least one non-zero transformation coefficient exists in the sub residual block, obtains transformation coefficients of the sub residual block based on location information of the non-zero transformation coefficient and level information of the non-zero transformation coefficient obtained from the bitstream. The inverse-transformer performs inverse-transformation on a transformation residual block including the sub residual block based on the transformation coefficients of the sub residual block.
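The flag-gated coefficient parsing can be summarized with the sketch below; bitstream.read_flag and bitstream.read_positions_and_levels are assumed placeholder interfaces, and a 16-coefficient sub residual block size is assumed only for concreteness.

def parse_sub_residual_blocks(bitstream, sub_blocks, sub_block_size=16):
    coefficients = {}
    for sb in sub_blocks:
        coeffs = [0] * sub_block_size
        # Effective coefficient flag: does this sub residual block contain
        # at least one non-zero transformation coefficient?
        if bitstream.read_flag():
            # Only then are location and level information read from the bitstream.
            for position, level in bitstream.read_positions_and_levels(sb):
                coeffs[position] = level
        coefficients[sb] = coeffs
    return coefficients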
Abstract:
An apparatus for decoding an image includes an entropy decoder that performs entropy decoding to generate quantized transformation coefficients of a transformation unit in a coding unit, and an inverse transformer that inverse quantizes the quantized transformation coefficients to generate transformation coefficients of the transformation unit and inverse transforms the transformation coefficients to generate residual components of the transformation unit.
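A toy version of that inverse-quantization and inverse-transform stage, assuming a flat quantization step and an orthonormal inverse DCT (via scipy) as stand-ins for a codec's scaling lists and integer transforms:

import numpy as np
from scipy.fft import idctn

def reconstruct_residuals(quantized_coefficients, quant_step):
    # Inverse quantization: scale the entropy-decoded, quantized transformation
    # coefficients back to transformation coefficients (flat step assumed).
    coefficients = np.asarray(quantized_coefficients, dtype=np.float64) * quant_step
    # Inverse transform the coefficients to obtain the residual components
    # of the transformation unit.
    return idctn(coefficients, norm="ortho")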
Abstract:
A method of decoding an image includes performing entropy-decoding to obtain quantized transformation coefficients of at least one transformation unit in a coding unit of the image, determining a prediction mode of at least one prediction unit in the coding unit from information indicating a prediction mode for the at least one prediction unit, when the prediction mode is determined to be an inter prediction mode, not an intra prediction mode, determining a size of the at least one transformation unit in the coding unit regardless of a size of the at least one prediction unit in the coding unit, performing inverse-quantization and inverse-transformation on the quantized transformation coefficients of the at least one transformation unit to obtain residuals, and performing inter prediction for the at least one prediction unit in the coding unit to generate a predictor and restoring the image by using the residuals and the predictor.
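One possible reading of that flow is sketched below: in inter mode the transformation unit size is taken from the coding unit (here, via an assumed signalled transform split depth) rather than from the prediction unit, and the residuals are added to the inter predictor. Every attribute and method on the assumed coding_unit and decoder objects is a hypothetical placeholder.

def reconstruct_inter_coding_unit(coding_unit, decoder):
    # The flow applies when the prediction mode is inter, not intra.
    assert decoder.prediction_mode(coding_unit) == "inter"
    # Transformation unit size chosen from the coding unit (via an assumed
    # signalled split depth), regardless of the prediction unit size.
    tu_size = coding_unit.size >> decoder.transform_split_depth(coding_unit)
    # Inverse-quantize and inverse-transform the quantized coefficients to residuals.
    residuals = decoder.inverse_transform(
        decoder.inverse_quantize(decoder.coefficients(coding_unit, tu_size)))
    # Inter prediction produces the predictor; residuals plus predictor restore the image.
    predictor = decoder.inter_predict(coding_unit)
    return residuals + predictor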