Abstract:
Provided are a multi-view video decoding apparatus and method and a multi-view video encoding apparatus and method. The decoding method includes: determining whether a prediction mode of a current block being decoded is a merge mode; when the prediction mode is determined to be the merge mode, forming a merge candidate list including at least one of an inter-view candidate, a spatial candidate, a disparity candidate, a view synthesis prediction candidate, and a temporal candidate; and predicting the current block by selecting, from the merge candidate list, a merge candidate to be used for the prediction, wherein whether to include, in the merge candidate list, at least one of a view synthesis prediction candidate for an adjacent block of the current block and a view synthesis prediction candidate for the current block is determined based on whether view synthesis prediction is performed on the adjacent block and the current block.
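Below is a minimal Python sketch of the merge-list construction described in this abstract; the MergeCandidate type, the candidate names, and the neighbor_uses_vsp / current_uses_vsp flags are illustrative assumptions rather than terms defined by the abstract.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MergeCandidate:
    kind: str           # e.g. 'inter_view', 'spatial', 'disparity', 'vsp', 'temporal'
    motion_info: tuple  # placeholder for (mv_x, mv_y, ref_idx)

def build_merge_candidate_list(inter_view: Optional[MergeCandidate],
                               spatial: List[MergeCandidate],
                               disparity: Optional[MergeCandidate],
                               temporal: Optional[MergeCandidate],
                               neighbor_uses_vsp: bool,
                               current_uses_vsp: bool,
                               max_candidates: int = 6) -> List[MergeCandidate]:
    """Assemble the merge list; VSP candidates are included only when view
    synthesis prediction is actually performed on the adjacent / current block."""
    candidates: List[MergeCandidate] = []
    if inter_view:
        candidates.append(inter_view)
    candidates.extend(spatial)
    if disparity:
        candidates.append(disparity)
    # VSP candidate of the adjacent block: inherited only if that neighbor used VSP.
    if neighbor_uses_vsp:
        candidates.append(MergeCandidate('vsp_neighbor', (0, 0, 0)))
    # VSP candidate of the current block itself.
    if current_uses_vsp:
        candidates.append(MergeCandidate('vsp_current', (0, 0, 0)))
    if temporal:
        candidates.append(temporal)
    return candidates[:max_candidates]
```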
Abstract:
An interlayer video decoding method comprises: reconstructing a first layer image based on encoding information acquired from a first layer bitstream; reconstructing a second layer current block, for which a predetermined partition mode and a prediction mode are determined, by using interlayer prediction information acquired from a second layer bitstream and a first layer reference block of the first layer reconstructed image, the first layer reference block corresponding to the current block to be reconstructed in the second layer; determining whether to perform luminance compensation on the second layer current block in a partition mode in which the second layer current block is not split; and compensating for the luminance of the second layer current block according to whether luminance compensation is performed, and reconstructing a second layer image including the second layer current block whose luminance has been compensated for.
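A minimal sketch of the luminance-compensation decision described above, assuming a single-partition mode labelled 'PART_2Nx2N' and a linear scale/offset compensation model; all names are hypothetical.

```python
def reconstruct_second_layer_block(pred_block, partition_mode, read_ic_flag,
                                   scale=1.0, offset=0.0):
    """Apply luminance compensation to the second layer current block only
    when the block is not split (here, partition mode 'PART_2Nx2N');
    otherwise keep the prediction from the first layer reference block."""
    apply_ic = (partition_mode == 'PART_2Nx2N') and read_ic_flag()
    if apply_ic:
        return [[scale * sample + offset for sample in row] for row in pred_block]
    return pred_block
```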
Abstract:
A method of generating a parameter set includes: obtaining common information that is commonly inserted into at least two lower parameter sets referring to the same upper parameter set; determining whether the common information is to be added to at least one among the upper parameter set and the at least two lower parameter sets; and adding the common information to at least one among the upper parameter set and the at least two lower parameter sets, based on a result of the determining.
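The sketch below illustrates the idea using plain Python dictionaries as parameter sets; the promote_to_upper switch stands in for the determining step and is an assumption.

```python
def extract_common_info(lower_sets):
    """Collect fields that appear with identical values in every lower
    parameter set referring to the same upper parameter set."""
    shared_keys = set.intersection(*(set(s) for s in lower_sets))
    return {k: lower_sets[0][k] for k in shared_keys
            if all(s[k] == lower_sets[0][k] for s in lower_sets)}

def place_common_info(upper_set, lower_sets, promote_to_upper):
    """Add the common information to the upper parameter set, or leave it in
    the lower parameter sets, based on the decision flag."""
    common = extract_common_info(lower_sets)
    if promote_to_upper:
        upper_set.update(common)
        for s in lower_sets:
            for k in common:
                s.pop(k, None)
    return upper_set, lower_sets
```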
Abstract:
Provided is a multi-layer video decoding method. The multi-layer video decoding method includes: obtaining, from a bitstream, dependency information indicating whether a first layer refers to a second layer; if the dependency information indicates that the first layer refers to the second layer, obtaining a reference picture set of the first layer, based on whether type information of the first layer and type information of the second layer are equal to each other; and decoding encoded data of a current image included in the first layer, based on the reference picture set.
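A minimal sketch of the reference-picture-set decision, assuming that the layers' type information can be compared directly and that only a same-type dependent layer contributes pictures; the function and parameter names are hypothetical.

```python
def inter_layer_reference_picture_set(dependency_flag, first_layer_type,
                                      second_layer_type, second_layer_pictures):
    """Second-layer pictures enter the first layer's reference picture set
    only when the first layer refers to the second layer and both layers
    carry the same type information."""
    if dependency_flag and first_layer_type == second_layer_type:
        return list(second_layer_pictures)
    return []
```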
Abstract:
Provided is an inter-layer video decoding method. The inter-layer video decoding method includes: determining whether a current block is split into two or more regions by using a depth block corresponding to the current block; generating a merge candidate list including at least one merge candidate for the current block, based on a result of the determination; determining motion information of the current block by using motion information of one of the at least one merge candidate included in the merge candidate list; and decoding the current block by using the determined motion information, wherein the generating of the merge candidate list includes determining whether a view synthesis prediction candidate is available as the merge candidate according to the result of the determination.
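A sketch of the split test and its effect on the merge list; the corner-sample threshold and the choice to add the view synthesis prediction candidate only when the block is not split are assumptions made for illustration.

```python
def is_split_by_depth(depth_block, threshold=None):
    """Treat the current block as two or more regions when samples of the
    corresponding depth block fall on both sides of a threshold (here the
    mean of the four corner samples, an assumed rule)."""
    corners = [depth_block[0][0], depth_block[0][-1],
               depth_block[-1][0], depth_block[-1][-1]]
    thr = threshold if threshold is not None else sum(corners) / 4.0
    flat = [s for row in depth_block for s in row]
    return any(s > thr for s in flat) and any(s <= thr for s in flat)

def merge_list_with_vsp(base_candidates, depth_block):
    """View synthesis prediction candidate availability depends on the
    split determination."""
    candidates = list(base_candidates)
    if not is_split_by_depth(depth_block):
        candidates.append('vsp')
    return candidates
```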
Abstract:
Provided is an inter-layer video decoding method including: obtaining prediction mode information of a depth image; generating a prediction block of a current block forming the depth image, based on the obtained prediction mode information; and decoding the depth image by using the prediction block, wherein the obtaining of the prediction mode information includes obtaining a first flag, which indicates whether the depth image allows a method of predicting the depth image by splitting blocks forming the depth image into at least two partitions using a wedgelet as a boundary, and a second flag, which indicates whether the depth image allows a method of predicting the depth image by splitting the blocks forming the depth image into at least two partitions using a contour as a boundary.
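A small sketch mapping the two flags to the depth-intra partitioning tools they enable; the mode names are placeholders.

```python
def allowed_depth_partition_modes(first_flag, second_flag):
    """Return the depth-image prediction tools permitted by the two flags."""
    modes = ['conventional_intra']
    if first_flag:            # wedgelet-boundary partitioning allowed
        modes.append('wedgelet')
    if second_flag:           # contour-boundary partitioning allowed
        modes.append('contour')
    return modes
```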
Abstract:
An inter-view video decoding method may include determining a disparity vector of a current second-view depth block by using a specific sample value selected within a sample value range determined based on a preset bit-depth, detecting a first-view depth block corresponding to the current second-view depth block by using the disparity vector, and reconstructing the current second-view depth block by generating a prediction block of the current second-view depth block based on coding information of the first-view depth block.
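A sketch of the disparity-vector derivation, assuming the specific sample value is the midpoint of the range implied by the preset bit depth and that depth maps to disparity through a fixed-point linear model with integer camera parameters; both assumptions go beyond what the abstract states.

```python
def default_disparity_vector(bit_depth, scale, offset, shift=8):
    """Pick the mid sample of the depth range [0, 2^bit_depth - 1] and map it
    to a horizontal disparity with a fixed-point linear model.
    scale, offset and shift are assumed integer camera parameters."""
    mid_sample = 1 << (bit_depth - 1)
    disparity = (scale * mid_sample + offset) >> shift
    return (disparity, 0)  # horizontal-only disparity vector
```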
Abstract:
Provided is a merge mode for determining motion information of pictures that constitute a multiview video by using motion information of another block. A multiview video decoding method includes obtaining motion inheritance information specifying whether motion information of a corresponding block of a first layer, which corresponds to a current block of a second layer, is available as motion information in the second layer, obtaining a merge candidate list by selectively including the motion information of the corresponding block among merge candidates when the current block that was encoded according to the merge mode is decoded, determining a merge candidate included in the merge candidate list according to merge candidate index information, and obtaining motion information of the current block based on the determined merge candidate.
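A minimal sketch of the merge decoding step with motion inheritance; the candidate ordering and the plain-list representation of candidates are assumptions.

```python
def decode_merge_motion(motion_inheritance_flag, corresponding_block_motion,
                        other_candidates, merge_idx):
    """Build the merge candidate list, optionally including the motion of the
    first layer corresponding block, then select the candidate indicated by
    the merge candidate index."""
    candidates = []
    if motion_inheritance_flag and corresponding_block_motion is not None:
        candidates.append(corresponding_block_motion)
    candidates.extend(other_candidates)
    return candidates[merge_idx]
```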
Abstract:
Provided is a video decoding method including: acquiring simplified depth coding (SDC) mode information indicating whether an SDC mode, in which a residual block of a prediction unit included in a coding unit of a depth image is encoded by using one residual DC component, is applied to the coding unit; acquiring, based on the SDC mode information, the residual DC component corresponding to the prediction unit of the coding unit to which the SDC mode is applied; and reconstructing a current block of the coding unit by using the residual DC component and a reference block of the prediction unit.
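A sketch of the SDC reconstruction step: every sample of the prediction unit shares a single residual DC value. The clipping range and the 8-bit default are assumptions.

```python
def reconstruct_sdc_block(pred_block, residual_dc, bit_depth=8):
    """Add the single residual DC component to every predicted sample of the
    prediction unit and clip to the valid sample range."""
    max_val = (1 << bit_depth) - 1
    return [[min(max(p + residual_dc, 0), max_val) for p in row]
            for row in pred_block]
```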
Abstract:
Provided are a scalable video decoding method and a scalable video decoding apparatus. The scalable video decoding method includes: obtaining, from a bitstream, scalability mask information specifying whether scalable video decoding is performed according to each of scalability types in a current video; when the scalability mask information indicates that scalable video decoding is performed on an auxiliary picture, determining one of auxiliary picture types, comprising an alpha plane and a depth picture of a primary picture of another layer, by using a scalability index indicating a type of the auxiliary picture to be decoded in a current layer; and decoding the auxiliary picture of the current layer.
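A sketch of the auxiliary-picture decision; the mask-bit layout and the index-to-type mapping (1 for alpha plane, 2 for depth picture) are assumptions for illustration.

```python
AUX_PICTURE_TYPES = {1: 'alpha_plane', 2: 'depth_picture'}  # assumed mapping

def auxiliary_picture_type(scalability_mask, aux_mask_bit, scalability_index):
    """Return the auxiliary-picture type to decode for the current layer, or
    None when the mask says auxiliary-picture scalability is not used."""
    if not (scalability_mask >> aux_mask_bit) & 1:
        return None
    return AUX_PICTURE_TYPES.get(scalability_index, 'unspecified')
```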