Abstract:
Provided are methods and apparatuses for encoding and decoding a motion vector. The method of encoding a motion vector includes: selecting a mode from among a first mode in which information indicating a motion vector predictor from among at least one motion vector predictor is encoded and a second mode in which information indicating generation of a motion vector predictor based on pixels included in a previously encoded area adjacent to a current block is encoded; determining a motion vector predictor of the current block according to the selected mode and encoding information about the motion vector predictor of the current block; and encoding a difference vector between a motion vector of the current block and the motion vector predictor of the current block.
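The two predictor modes above can be sketched as follows. This is an illustrative reading of the abstract, not an implementation of any particular codec: the candidate list, the `implicit_predictor` argument (standing in for a predictor derived from previously encoded pixels), and the L1 selection rule are all assumptions.

```python
def closest_candidate(mv, candidates):
    """Index of the candidate predictor with the smallest L1 distance to mv."""
    return min(range(len(candidates)),
               key=lambda k: abs(mv[0] - candidates[k][0]) + abs(mv[1] - candidates[k][1]))

def encode_motion_vector(mv, candidates, implicit_predictor, mode):
    """Return (mode, predictor info, difference vector) for one block.

    mode == "explicit": first mode -- an index identifying the chosen predictor
    among the candidates is encoded.
    mode == "implicit": second mode -- the predictor is derived from previously
    encoded neighboring pixels, so no index needs to be encoded (info is None).
    """
    if mode == "explicit":
        idx = closest_candidate(mv, candidates)
        predictor, info = candidates[idx], idx
    else:
        predictor, info = implicit_predictor, None
    # Only the difference vector (and the mode/info) is encoded, not mv itself.
    diff = (mv[0] - predictor[0], mv[1] - predictor[1])
    return mode, info, diff
```

The decoder mirrors this: it reads the mode, recovers the same predictor (by index or by rederiving it from decoded pixels), and adds the difference vector back.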
Abstract:
An apparatus and method for encoding video data and an apparatus and method for decoding video data are provided. The encoding method includes: splitting a current picture into at least one maximum coding unit; determining a coded depth at which to output an encoding result by encoding at least one split region of the at least one maximum coding unit according to an operating mode of a coding tool, respectively, based on a relationship among a depth of at least one coding unit of the at least one maximum coding unit, a coding tool, and an operating mode, wherein the at least one split region is generated by hierarchically splitting the at least one maximum coding unit according to depths; and outputting a bitstream including encoded video data of the coded depth, information regarding the coded depth of the at least one maximum coding unit, information regarding an encoding mode, and information regarding the relationship.
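The "relationship among a depth, a coding tool, and an operating mode" can be pictured as a lookup table that the encoder signals and both sides consult. The table contents below (tool names and mode labels) are invented for illustration; the abstract does not specify any particular tools or modes.

```python
# Hypothetical relationship: for each coding tool, which operating mode
# applies at each coding-unit depth. Signaled in the bitstream per the abstract.
TOOL_MODE_BY_DEPTH = {
    "interpolation":  {0: "8tap", 1: "8tap", 2: "4tap"},
    "in_loop_filter": {0: "on",   1: "on",   2: "off"},
}

def operating_mode(tool, depth):
    """Operating mode of a coding tool for a coding unit at a given depth."""
    return TOOL_MODE_BY_DEPTH[tool][depth]
```

Because the table is part of the bitstream, the decoder can apply the same tool configuration per depth without any per-unit signaling.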
Abstract:
A decoding apparatus includes: a splitter which splits an image into a plurality of maximum coding units, hierarchically splits a maximum coding unit among the plurality of maximum coding units into a plurality of coding units, and determines one or more transformation residual blocks from a coding unit among the plurality of coding units, wherein the one or more transformation residual blocks include sub residual blocks; a parser which obtains an effective coefficient flag of a sub residual block among the sub residual blocks from a bitstream, the effective coefficient flag indicating whether at least one non-zero effective transformation coefficient exists in the sub residual block, and, when the effective coefficient flag indicates that at least one non-zero transformation coefficient exists in the sub residual block, obtains transformation coefficients of the sub residual block based on location information and level information of the non-zero transformation coefficient obtained from the bitstream; and an inverse-transformer which performs inverse transformation on a transformation residual block including the sub residual block based on the transformation coefficients of the sub residual block.
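The effective-coefficient-flag logic can be sketched roughly as below. The function signature and the representation of location/level information as parallel lists are assumptions for illustration; a real parser would read them from entropy-coded bitstream syntax.

```python
def decode_sub_block(effective_flag, positions, levels, size=4):
    """Rebuild a size x size sub residual block of transformation coefficients.

    effective_flag: 1 if at least one non-zero coefficient exists, else 0.
    positions/levels: (row, col) locations and values of non-zero coefficients
    (only consulted when the flag is set).
    """
    block = [[0] * size for _ in range(size)]
    if effective_flag:
        for (r, c), level in zip(positions, levels):
            block[r][c] = level
    # When the flag is 0, no coefficient data is parsed at all: the whole
    # sub block is zero, which is what makes the flag a useful shortcut.
    return block
```

The inverse-transformer then operates on the residual block assembled from these sub blocks.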
Abstract:
Disclosed is a method of encoding a video, the method including: splitting a current picture into at least one maximum coding unit; determining a coded depth to output a final encoding result according to at least one split region obtained by splitting a region of the maximum coding unit according to depths, by encoding the at least one split region, based on a depth that deepens in proportion to the number of times the region of the maximum coding unit is split; and outputting image data constituting the final encoding result according to the at least one split region, and encoding information about the coded depth and a prediction mode, according to the at least one maximum coding unit.
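The hierarchical splitting described above, where depth deepens each time a region of the maximum coding unit is split, can be sketched as a quadtree recursion. The cost-based depth selection is shown only schematically; the actual rate-distortion criterion is not specified in the abstract.

```python
def split_region(x, y, size, depth, max_depth):
    """Enumerate (x, y, size, depth) regions of a maximum coding unit.

    Each split divides a region into four quadrants, and the depth of the
    resulting regions deepens by one -- i.e. depth is proportional to the
    number of times the region has been split.
    """
    regions = [(x, y, size, depth)]
    if depth < max_depth:
        half = size // 2
        for ox, oy in ((0, 0), (half, 0), (0, half), (half, half)):
            regions += split_region(x + ox, y + oy, half, depth + 1, max_depth)
    return regions

def best_coded_depth(cost_by_depth):
    """Pick the coded depth with minimum encoding cost (illustrative)."""
    return min(cost_by_depth, key=cost_by_depth.get)
```

An encoder would try each depth, compare the resulting encoding costs, and output image data for the winning (coded) depth along with the depth and prediction-mode information.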
Abstract:
A method of decoding an image includes: obtaining, from a bitstream, information that indicates an intra prediction mode of a current block to be decoded, the intra prediction mode indicating a particular direction among a plurality of directions, the particular direction being indicated by either dx pixels in a horizontal direction and a fixed number of pixels in a vertical direction, or dy pixels in the vertical direction and a fixed number of pixels in the horizontal direction; obtaining a number of neighboring pixels located on one side among a left side of the current block and an upper side of the current block, according to a position of a current pixel (j,i) and the particular direction (dx or dy) indicated by the intra prediction mode; when the number of neighboring pixels is 1, obtaining a prediction value of the current pixel based on the neighboring pixel; and when the number of neighboring pixels is 2, obtaining the prediction value of the current pixel based on a weighted average of the neighboring pixels.
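The one-versus-two-neighbor prediction step can be sketched as below. The rounding convention and the default equal weights are assumptions; the abstract only says the two-neighbor case uses a weighted average.

```python
def predict_pixel(neighbors, weights=None):
    """Prediction value of a current pixel from its reference neighbors.

    neighbors: list of 1 or 2 reference pixel values along the prediction
    direction. With one neighbor, its value is used directly; with two,
    an integer weighted average (with rounding) is used, as is typical
    in video codecs.
    """
    if len(neighbors) == 1:
        return neighbors[0]
    w0, w1 = weights or (1, 1)
    total = w0 + w1
    return (neighbors[0] * w0 + neighbors[1] * w1 + total // 2) // total
```

In an angular mode, the direction (dx or dy) determines whether the projection from the current pixel lands exactly on one reference pixel or between two, which is what decides how many neighbors feed this function.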
Abstract:
A method and apparatus for decoding video and a method and apparatus for encoding video are provided. The method for decoding video includes: receiving and parsing a bitstream of encoded video; extracting, from the bitstream, encoded image data of a current picture assigned to a maximum coding unit of the current picture, information regarding a coded depth of the maximum coding unit, information regarding an encoding mode, and coding unit pattern information indicating whether texture information of the maximum coding unit has been encoded; and decoding the encoded image data for the maximum coding unit, based on the information regarding the coded depth of the maximum coding unit, the information regarding the encoding mode, and the coding unit pattern information.
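The role of the coding unit pattern information can be sketched as a reconstruction shortcut. Representing a coding unit as a flat list of samples, and the "texture information" as a residual list, are simplifications for illustration.

```python
def reconstruct(pattern_flag, prediction, residual):
    """Reconstruct a coding unit's samples.

    pattern_flag: coding unit pattern information -- 1 if texture (residual)
    information was encoded for this unit, 0 if not.
    """
    if not pattern_flag:
        # No texture was encoded: the prediction alone is the reconstruction,
        # and no residual data needs to be parsed or inverse-transformed.
        return list(prediction)
    return [p + r for p, r in zip(prediction, residual)]
```

Signaling this one flag lets the decoder skip residual parsing entirely for units whose prediction was already good enough.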
Abstract:
A method and apparatus for decoding a video and a method and apparatus for encoding a video are provided. The method for decoding the video includes: receiving and parsing a bitstream of an encoded video; extracting, from the bitstream, encoded image data of a current picture of the encoded video assigned to a maximum coding unit, and information about a coded depth and an encoding mode according to the maximum coding unit; and decoding the encoded image data for the maximum coding unit based on the information about the coded depth and the encoding mode for the maximum coding unit, in consideration of a raster scanning order for the maximum coding unit and a zigzag scanning order for coding units of the maximum coding unit according to depths.
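The two scan orders above, raster order across maximum coding units but zigzag (Z-scan) order among the coding units inside one, can be illustrated with a bit-interleaving index. Treating the zigzag order as Morton/Z-order over (x, y) block coordinates is an assumption consistent with quadtree traversal, not a claim about the patent's exact definition.

```python
def z_order_index(x, y, bits=16):
    """Z-scan index of a block at quadtree coordinates (x, y).

    Interleaves the bits of x (even positions) and y (odd positions), so the
    four children of any quadtree node are visited consecutively:
    (0,0), (1,0), (0,1), (1,1) -> 0, 1, 2, 3.
    """
    idx = 0
    for b in range(bits):
        idx |= ((x >> b) & 1) << (2 * b)
        idx |= ((y >> b) & 1) << (2 * b + 1)
    return idx
```

Decoding in this order guarantees that when a coding unit is processed, its left and upper neighbors inside the same maximum coding unit are already decoded, regardless of how deeply the quadtree is split.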
Abstract:
An apparatus for decoding an image includes: an entropy decoder which obtains, from a bitstream, information about an intra prediction mode applied to a current block to be decoded; and an intra prediction performer which obtains one of a left neighboring pixel whose location is determined based on j*dy>>n and an upper neighboring pixel whose location is determined based on i*dx>>m, where the current pixel is located at (i,j) and dx, dy, m, and n are integers, and performs intra prediction on the current pixel using the obtained one of the left neighboring pixel and the upper neighboring pixel.
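The neighbor-location expressions i*dx>>m and j*dy>>n can be sketched as below. Which of i and j denotes the row is not fixed by the abstract, so the mapping here (i as row, j as column) is an assumption; the point is that the shift by m or n implements a division by a power of two without floating-point arithmetic.

```python
def upper_neighbor_col(i, j, dx, m):
    """Column of the upper reference pixel (in the row above the block)
    for current pixel (i, j): offset (i*dx) >> m from column j (sketch)."""
    return j + ((i * dx) >> m)

def left_neighbor_row(i, j, dy, n):
    """Row of the left reference pixel (in the column left of the block)
    for current pixel (i, j): offset (j*dy) >> n from row i (sketch)."""
    return i + ((j * dy) >> n)
```

Restricting the slope denominators to powers of two (2**m, 2**n) is what lets the performer locate reference pixels with only a multiply and a shift per pixel.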