Abstract:
Provided are a method and apparatus for encoding a video and a method and apparatus for decoding a video. The encoding method includes: splitting a picture of the video into one or more maximum coding units; encoding the picture based on coding units according to depths, which are obtained based on a partition type determined for each depth, determining coding units according to coded depths for each of the coding units according to depths, and thus determining coding units having a tree structure; and outputting data that is encoded based on the partition type and the coding units having the tree structure, information about the coded depths and an encoding mode, and coding unit structure information indicating a size and a variable depth of a coding unit.
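As a rough, non-authoritative illustration of the depth-wise splitting described above, the Python sketch below searches a quadtree over a single maximum coding unit and keeps or splits each coding unit by comparing costs; the rd_cost() placeholder, the 64x64 size, and the dictionary layout are assumptions for illustration, not the claimed procedure.

```python
def rd_cost(x, y, size):
    """Hypothetical rate-distortion cost of coding the block at (x, y); a real
    encoder would measure rate plus weighted distortion."""
    return float(size * size)

def determine_tree(x, y, size, depth, max_depth=3):
    """Recursively decide whether the coding unit at this depth is kept or split
    into four deeper coding units, yielding coding units having a tree structure.
    Returns (tree, cost)."""
    keep_cost = rd_cost(x, y, size)
    if depth == max_depth:
        return {"pos": (x, y), "size": size, "coded_depth": depth}, keep_cost
    half = size // 2
    children, split_cost = [], 0.0
    for dy in (0, half):
        for dx in (0, half):
            child, cost = determine_tree(x + dx, y + dy, half, depth + 1, max_depth)
            children.append(child)
            split_cost += cost
    if split_cost < keep_cost:
        return {"pos": (x, y), "size": size, "children": children}, split_cost
    return {"pos": (x, y), "size": size, "coded_depth": depth}, keep_cost

tree, cost = determine_tree(0, 0, 64, 0)  # one 64x64 maximum coding unit
```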
Abstract:
Provided are a method and apparatus for determining a context model for entropy encoding and decoding of a transformation coefficient. According to the method and apparatus, a context set index ctxset is obtained based on color component information of a transformation unit, a location of a current subset, and whether there is a significant transformation coefficient having a value greater than a first critical value in a previous subset, and a context offset c1 is obtained based on a length of consecutive 1s in previously processed transformation coefficients. Also, a context index ctxidx for entropy encoding and decoding of a first critical value flag is determined based on the obtained context set index and the context offset.
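The sketch below illustrates one way such a context index could be assembled from the three inputs named above; the specific constants (the offset of 2 for later luma subsets, the cap of 3 on c1, and four contexts per set) loosely follow HEVC-style context selection and are assumptions for illustration, not the exact mapping of the method.

```python
def context_set_index(is_luma, is_first_subset, prev_subset_has_greater1):
    """ctxset from the color component, the location of the current subset, and
    whether the previous subset held a coefficient above the first critical value."""
    ctxset = 0 if (not is_luma or is_first_subset) else 2
    if prev_subset_has_greater1:
        ctxset += 1
    return ctxset

def context_offset(consecutive_ones):
    """c1 from the length of the run of consecutive 1s among previously
    processed coefficients, capped so the offset stays in a small range."""
    return min(1 + consecutive_ones, 3)

def context_index(ctxset, c1, contexts_per_set=4):
    """Combine ctxset and c1 into the context index used to entropy code the
    first critical value (greater-than-1) flag."""
    return ctxset * contexts_per_set + c1

idx = context_index(context_set_index(True, False, True), context_offset(1))
```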
Abstract:
A method and apparatus for decoding video and a method and apparatus for encoding video are provided. The method for decoding video includes: receiving and parsing a bitstream of encoded video; extracting, from the bitstream, encoded image data of a current picture assigned to a maximum coding unit of the current picture, information regarding a coded depth of the maximum coding unit, information regarding an encoding mode, and coding unit pattern information indicating whether texture information of the maximum coding unit has been encoded; and decoding the encoded image data for the maximum coding unit, based on the information regarding the coded depth of the maximum coding unit, the information regarding the encoding mode, and the coding unit pattern information.
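A minimal sketch of how coding unit pattern information might gate texture decoding is shown below; the Bitstream class and the returned dictionaries are illustrative stand-ins for a real entropy decoder and reconstruction, not the actual decoder interface.

```python
class Bitstream:
    """Toy bit reader standing in for a real entropy decoder."""
    def __init__(self, bits):
        self.bits, self.pos = list(bits), 0
    def read_flag(self):
        bit = self.bits[self.pos]
        self.pos += 1
        return bool(bit)

def decode_maximum_coding_unit(bs, coded_depth, encoding_mode):
    """Decode one maximum coding unit: the coding unit pattern information tells
    the decoder whether any texture (residual) data was encoded for it."""
    has_texture = bs.read_flag()      # coding unit pattern information
    prediction = {"depth": coded_depth, "mode": encoding_mode}
    if not has_texture:
        return {"reconstruction": "prediction only", **prediction}
    residual_bits = bs.bits[bs.pos:]  # placeholder for the encoded texture data
    return {"reconstruction": "prediction + residual",
            "residual": residual_bits, **prediction}

print(decode_maximum_coding_unit(Bitstream([0]), coded_depth=2, encoding_mode="intra"))
```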
Abstract:
Provided is a video decoding method including: extracting, from a bitstream of an encoded video, at least one of information indicating independent parsing of a data unit and information indicating independent decoding of a data unit; extracting encoded video data and information about a coded depth and an encoding mode according to maximum coding units by parsing the bitstream based on the information indicating independent parsing of the data unit; and decoding at least one coding unit according to a coded depth of each maximum coding unit of the encoded video data, based on the information indicating independent decoding of the data unit and the information about the coded depth and the encoding mode according to maximum coding units.
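The sketch below illustrates, under simplifying assumptions, how the two independence indications could steer a decoder: independent parsing resets parser state between data units, and independent decoding withholds neighbor references; parse_unit and decode_unit are hypothetical placeholders, not the actual decoding process.

```python
def parse_unit(payload, parser_state):
    """Toy parser; a real decoder entropy-decodes the coded depth, mode and data."""
    return payload.get("depth", 0), payload.get("mode", "intra"), payload.get("data", b"")

def decode_unit(data, depth, mode, reference):
    """Toy reconstruction; reference is None when units must decode independently."""
    return {"depth": depth, "mode": mode, "uses_neighbor": reference is not None}

def decode_sequence(units, independent_parsing, independent_decoding):
    """Parse and decode per-maximum-coding-unit payloads, resetting parser state
    and withholding neighbor references when the independence flags are set."""
    parser_state, neighbor, out = {}, None, []
    for payload in units:
        if independent_parsing:
            parser_state = {}             # nothing carried across data units
        depth, mode, data = parse_unit(payload, parser_state)
        reference = None if independent_decoding else neighbor
        decoded = decode_unit(data, depth, mode, reference)
        neighbor = decoded
        out.append(decoded)
    return out
```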
Abstract:
A method and apparatus for encoding video by using deblocking filtering, and a method and apparatus for decoding video by using deblocking filtering are provided. The method of encoding video includes: splitting a picture into a maximum coding unit; determining coding units of coded depths and encoding modes for the coding units of the maximum coding unit by prediction encoding the coding units of the maximum coding unit based on at least one prediction unit and transforming the coding units based on at least one transformation unit, wherein the maximum coding unit is hierarchically split into the coding units as a depth deepens, and the coded depths are depths at which the maximum coding unit is encoded in the coding units; and performing deblocking filtering on video data that has been inversely transformed into a spatial domain in the coding units, in consideration of the encoding modes.
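As an illustration of deblocking filtering performed in consideration of encoding modes, the sketch below derives a boundary strength for each coding unit edge from the modes of the neighboring units; the intra/residual rules and the omitted pixel filtering follow common deblocking practice and are assumptions, not the specific filtering of the method.

```python
def boundary_strength(left_cu, right_cu):
    """Choose a deblocking strength for the edge between two coding units in
    consideration of their encoding modes."""
    if left_cu["mode"] == "intra" or right_cu["mode"] == "intra":
        return 2
    if left_cu["has_residual"] or right_cu["has_residual"]:
        return 1
    return 0

def deblock_row(coding_units):
    """Mark every vertical edge in a row of reconstructed coding units that a
    real filter would smooth; actual pixel filtering is omitted."""
    for left, right in zip(coding_units, coding_units[1:]):
        if boundary_strength(left, right) > 0:
            left["filter_right_edge"] = True
            right["filter_left_edge"] = True
    return coding_units

row = deblock_row([{"mode": "intra", "has_residual": True},
                   {"mode": "inter", "has_residual": False}])
```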
Abstract:
A method and apparatus for decoding a video and a method and apparatus for encoding a video are provided. The method for decoding the video includes: receiving and parsing a bitstream of an encoded video; extracting, from the bitstream, encoded image data of a current picture of the encoded video assigned to a maximum coding unit, and information about a coded depth and an encoding mode according to the maximum coding unit; and decoding the encoded image data for the maximum coding unit based on the information about the coded depth and the encoding mode for the maximum coding unit, in consideration of a raster scanning order for the maximum coding unit and a zigzag scanning order for coding units of the maximum coding unit according to depths.
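The sketch below illustrates the two scan orders named in the abstract, assuming for simplicity a 64x64 maximum coding unit fully split down to 8x8 coding units: maximum coding units are visited in raster order, and the coding units inside each one are visited in zigzag (z) order.

```python
def raster_scan_max_units(pic_w, pic_h, max_cu=64):
    """Maximum coding units are visited left-to-right, top-to-bottom."""
    return [(x, y) for y in range(0, pic_h, max_cu) for x in range(0, pic_w, max_cu)]

def zigzag_scan(x, y, size, min_cu=8):
    """Coding units inside a maximum coding unit are visited in zigzag (z) order:
    each unit splits into quadrants traversed top-left, top-right, bottom-left,
    bottom-right; here every unit is split down to min_cu for illustration."""
    if size <= min_cu:
        return [(x, y, size)]
    half = size // 2
    order = []
    for dy in (0, half):
        for dx in (0, half):
            order.extend(zigzag_scan(x + dx, y + dy, half, min_cu))
    return order

scan = [cu for (x, y) in raster_scan_max_units(128, 128)
        for cu in zigzag_scan(x, y, 64)]
```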