Abstract:
Provided are an image encoding/decoding apparatus and method using data hiding. When the difference between the scan positions of the final effective transform coefficient and the initial effective transform coefficient of a sub-block of a current transform unit is greater than a threshold value, an intra-prediction direction of a current coding unit is determined from the parity of the sum of the transform coefficients of the sub-block at certain scan positions, or the level of an effective transform coefficient of the sub-block is corrected so that the parity of the sum indicates the intra-prediction direction of the current coding unit. Encoding and decoding efficiency may be improved because the bitrate is reduced by hiding data in the parity of effective transform coefficients.
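The parity mechanism can be illustrated with a minimal Python sketch. The threshold value, the choice of which effective coefficient to adjust, and all function names below are assumptions made for illustration, not the patent's normative rules.

    THRESHOLD = 4  # assumed minimum distance between first and last effective positions

    def significant_positions(coeffs):
        """Scan positions of the nonzero (effective) transform coefficients."""
        return [i for i, c in enumerate(coeffs) if c != 0]

    def can_hide(coeffs):
        """Hiding is enabled only when the effective coefficients are spread far enough apart."""
        pos = significant_positions(coeffs)
        return len(pos) > 0 and (pos[-1] - pos[0]) > THRESHOLD

    def decode_hidden_bit(coeffs):
        """Decoder side: recover the hidden bit from the parity of the level sum."""
        return sum(abs(c) for c in coeffs) & 1

    def encode_hidden_bit(coeffs, bit):
        """Encoder side: adjust one effective level so the parity carries 'bit'."""
        coeffs = list(coeffs)
        if (sum(abs(c) for c in coeffs) & 1) != bit:
            i = significant_positions(coeffs)[-1]    # correct the last effective coefficient
            coeffs[i] += 1 if coeffs[i] > 0 else -1  # +/-1 keeps the sign and flips the parity
        return coeffs

    # Hide one bit describing the intra-prediction direction in a 4x4 sub-block.
    sub_block = [0, 3, 0, -2, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 5, 0]
    if can_hide(sub_block):
        coded = encode_hidden_bit(sub_block, bit=1)
        assert decode_hidden_bit(coded) == 1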
Abstract:
Provided are a video decoding method and a video decoding apparatus capable of performing the video decoding method. The video decoding method includes: determining neighboring pixels of a current block to be used for performing intra prediction on the current block; acquiring, from a bitstream, information indicating one of a plurality of filtering methods used on the neighboring pixels; selecting one of the plurality of filtering methods according to the acquired information; filtering the neighboring pixels by using the selected filtering method; and performing the intra prediction on the current block by using the filtered neighboring pixels, wherein the plurality of filtering methods comprise a spatial domain filtering method and a frequency domain filtering method, wherein the spatial domain filtering method filters the neighboring pixels in a spatial domain, and the frequency domain filtering method filters the neighboring pixels in a frequency domain.
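A minimal sketch contrasting the two filtering families named above, in Python with NumPy. The filter taps, the frequency cut-off, and the function names are illustrative assumptions; the bitstream syntax that signals the selected method is not modeled.

    import numpy as np

    def spatial_filter(ref):
        """Spatial domain: [1, 2, 1] / 4 smoothing of the neighboring reference samples."""
        ref = np.asarray(ref, dtype=float)
        padded = np.pad(ref, 1, mode='edge')
        return (padded[:-2] + 2 * padded[1:-1] + padded[2:]) / 4.0

    def frequency_filter(ref, keep_ratio=0.5):
        """Frequency domain: transform, suppress high frequencies, transform back."""
        spectrum = np.fft.rfft(np.asarray(ref, dtype=float))
        cutoff = max(1, int(len(spectrum) * keep_ratio))
        spectrum[cutoff:] = 0                      # zero out the high-frequency part
        return np.fft.irfft(spectrum, n=len(ref))

    def filter_neighbors(ref, method_flag):
        """Select the filtering method according to the decoded flag (0 or 1)."""
        return spatial_filter(ref) if method_flag == 0 else frequency_filter(ref)

    neighbors = [52, 55, 60, 61, 59, 58, 64, 70]   # reconstructed pixels above/left of the block
    filtered = filter_neighbors(neighbors, method_flag=1)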
Abstract:
Disclosed is an encoding device. The present encoding device comprises an interface communicating with a decoding device and a processor which: divides a target block of the current frame into a first area and a second area according to a predetermined division method; searches for a first motion vector for the first area in a first reference frame so as to generate a first prediction block including an area corresponding to the first area; divides the first prediction block into a third area and a fourth area according to the predetermined division method, and generates boundary information; searches for a second motion vector for the fourth area corresponding to the second area in a second reference frame, and generates a second prediction block including an area corresponding to the fourth area; merges the first prediction block and the second prediction block according to the boundary information so as to generate a third prediction block corresponding to the target block; and controls the interface to transmit the first motion vector and the second motion vector to the decoding device.
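The merging step can be sketched as follows; the binary mask used as boundary information and the diagonal split are illustrative assumptions, since the abstract does not specify the predetermined division method.

    import numpy as np

    def diagonal_mask(height, width):
        """Boundary information as a binary mask: 1 for the first area, 0 for the second."""
        rows, cols = np.indices((height, width))
        return (rows + cols < (height + width) // 2).astype(np.uint8)

    def merge_predictions(pred1, pred2, mask):
        """Third prediction block: pred1 inside the first area, pred2 elsewhere."""
        return np.where(mask == 1, pred1, pred2)

    h, w = 8, 8
    pred_from_mv1 = np.full((h, w), 100)  # prediction fetched with the first motion vector
    pred_from_mv2 = np.full((h, w), 140)  # prediction fetched with the second motion vector
    merged = merge_predictions(pred_from_mv1, pred_from_mv2, diagonal_mask(h, w))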
Abstract:
Provided is a method of decoding a video according to an embodiment, the method including determining at least one processing block for splitting the video; determining an order in which at least one largest coding unit in the at least one processing block is determined; determining the at least one largest coding unit on the basis of the determined order; and decoding the determined at least one largest coding unit, wherein the order is one of a plurality of orders for determining a largest coding unit.
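A minimal sketch of selecting one of several scan orders for the largest coding units inside a processing block. The two orders shown (raster and reverse raster) and the order index are illustrative assumptions.

    def lcu_positions(block_width, block_height, lcu_size, order_index):
        """Positions of the largest coding units inside one processing block, in the chosen order."""
        positions = [(y, x)
                     for y in range(0, block_height, lcu_size)
                     for x in range(0, block_width, lcu_size)]   # raster order
        if order_index == 1:
            positions = positions[::-1]                          # reverse raster order
        return positions

    # Decode each largest coding unit of a 128x64 processing block in the signaled order.
    for y, x in lcu_positions(128, 64, lcu_size=64, order_index=1):
        pass  # decode the largest coding unit whose top-left corner is (x, y)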
Abstract:
Provided is an inter-layer video decoding method including obtaining a disparity vector of a current block included in a first layer image; determining a block of a second layer image corresponding to the current block by using the obtained disparity vector; determining a reference block including a sample that contacts a boundary of the block; obtaining a motion vector of the reference block; and determining a motion vector of the current block included in the first layer image by using the obtained motion vector.
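A minimal sketch of the motion-vector derivation described above. The choice of the bottom-right boundary sample, the 16x16 motion storage grid, and the helper names are illustrative assumptions.

    def corresponding_position(cur_x, cur_y, disparity):
        """Position of the second-layer block pointed to by the disparity vector."""
        dvx, dvy = disparity
        return cur_x + dvx, cur_y + dvy

    def boundary_sample(block_x, block_y, block_w, block_h):
        """A sample that contacts the block boundary (here: the bottom-right corner)."""
        return block_x + block_w - 1, block_y + block_h - 1

    def inherit_motion_vector(motion_field, sample_x, sample_y, grid=16):
        """Motion vector of the reference block covering the chosen boundary sample."""
        return motion_field.get((sample_x // grid, sample_y // grid))

    # Motion field of the second-layer image, stored per 16x16 block.
    motion_field = {(6, 4): (-2, 1)}
    bx, by = corresponding_position(64, 48, disparity=(18, 6))
    sx, sy = boundary_sample(bx, by, 16, 16)
    mv_current = inherit_motion_vector(motion_field, sx, sy)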
Abstract:
Disclosed is an inter-layer video decoding method including decoding a first layer image, determining a reference location of the first layer image corresponding to a location of a second layer current block, determining neighboring sample values by using sample values of a boundary of the first layer image when neighboring sample locations of the reference location are outside the boundary of the first layer image, and determining an illumination compensation parameter of the second layer current block based on the neighboring sample values.
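A minimal sketch of the boundary handling and parameter derivation described above. The clamping rule and the simple mean-offset parameter are illustrative assumptions; an actual codec typically derives both a scale and an offset.

    def clamped_sample(image, x, y):
        """Use the nearest boundary sample when (x, y) falls outside the first layer image."""
        h, w = len(image), len(image[0])
        x = min(max(x, 0), w - 1)
        y = min(max(y, 0), h - 1)
        return image[y][x]

    def illumination_offset(first_layer, ref_x, ref_y, size, second_layer_neighbors):
        """Offset between the second-layer neighbors and the co-located first-layer neighbors."""
        ref_neighbors = [clamped_sample(first_layer, ref_x + i, ref_y - 1) for i in range(size)]
        ref_neighbors += [clamped_sample(first_layer, ref_x - 1, ref_y + i) for i in range(size)]
        avg_ref = sum(ref_neighbors) / len(ref_neighbors)
        avg_cur = sum(second_layer_neighbors) / len(second_layer_neighbors)
        return avg_cur - avg_ref

    # Reference location at the top-left image corner, so the above/left neighbors are clamped.
    first_layer = [[100 + x for x in range(16)] for _ in range(16)]
    offset = illumination_offset(first_layer, ref_x=0, ref_y=0, size=2,
                                 second_layer_neighbors=[118, 120, 119, 121])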
Abstract:
Provided is an inter-layer video decoding method. The inter-layer video decoding method includes: determining whether a current block is split into two or more regions by using a depth block corresponding to the current block; generating a merge candidate list including at least one merge candidate for the current block, based on a result of the determination; determining motion information of the current block by using motion information of one of the at least one merge candidate included in the merge candidate list; and decoding the current block by using the determined motion information, wherein the generating of the merge candidate list includes determining whether a view synthesis prediction candidate is available as the merge candidate according to the result of the determination.
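A minimal sketch of the candidate-list construction described above. The split test on the depth block, the availability rule for the view synthesis prediction candidate, and the candidate names are assumptions for illustration, not the normative derivation.

    def is_split_by_depth(depth_block, threshold=8):
        """Treat the current block as split into two or more regions when its depth range is large."""
        samples = [d for row in depth_block for d in row]
        return max(samples) - min(samples) > threshold

    def build_merge_candidates(spatial_candidates, depth_block):
        candidates = list(spatial_candidates)
        # Assumed rule: the view synthesis prediction candidate is available
        # only when the depth block does not indicate a split.
        if not is_split_by_depth(depth_block):
            candidates.append('view_synthesis_prediction')
        return candidates

    depth_block = [[60, 61], [62, 60]]
    merge_list = build_merge_candidates(['spatial_A1', 'spatial_B1', 'temporal'], depth_block)
    motion_info = merge_list[0]  # selected by the signaled merge index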
Abstract:
Provided is an inter-layer video decoding method including: obtaining prediction mode information of a depth image; generating a prediction block of a current block forming the depth image, based on the obtained prediction mode information; and decoding the depth image by using the prediction block, wherein the obtaining of the prediction mode information includes obtaining a first flag, which indicates whether the depth image allows a method of predicting the depth image by splitting blocks forming the depth image into at least two partitions using a wedgelet as a boundary, and a second flag, which indicates whether the depth image allows a method of predicting the depth image by splitting the blocks forming the depth image into at least two partitions using a contour as a boundary.
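A minimal sketch of how the two flags could gate the depth partition modes. The flag order, the bit reader, and the mode names are illustrative assumptions about the bitstream syntax.

    class BitReader:
        def __init__(self, bits):
            self.bits, self.pos = bits, 0
        def read_flag(self):
            bit = self.bits[self.pos]
            self.pos += 1
            return bit

    def allowed_depth_partition_modes(reader):
        wedgelet_allowed = reader.read_flag()  # first flag: wedgelet-boundary partitions
        contour_allowed = reader.read_flag()   # second flag: contour-boundary partitions
        modes = ['conventional_intra']         # regular intra prediction is always available
        if wedgelet_allowed:
            modes.append('wedgelet_partition')
        if contour_allowed:
            modes.append('contour_partition')
        return modes

    modes = allowed_depth_partition_modes(BitReader([1, 0]))  # wedgelet on, contour off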
Abstract:
Provided is an inter-layer video decoding method including: obtaining motion inheritance information from a bitstream; when the motion inheritance information indicates that motion information of a block of a first layer, which corresponds to a current block of a second layer, is usable as motion information of the second layer, determining whether motion information of a sub-block including a pixel at a predetermined location of the block of the first layer from among sub-blocks of the block of the first layer, which correspond to sub-blocks of the current block, is usable; when it is determined that the motion information of the sub-block including the pixel at the predetermined location of the block of the first layer is usable, obtaining motion information of the sub-blocks of the block of the first layer; and determining motion information of the sub-blocks of the current block based on the obtained motion information of the sub-blocks of the block of the first layer.
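A minimal sketch of the sub-block motion inheritance described above. The use of the center sub-block as the predetermined location and the fallback rule for sub-blocks without motion information are illustrative assumptions.

    def inherit_subblock_motion(first_layer_mvs):
        """first_layer_mvs: 2D list of motion vectors (or None) per first-layer sub-block."""
        rows, cols = len(first_layer_mvs), len(first_layer_mvs[0])
        center = first_layer_mvs[rows // 2][cols // 2]
        if center is None:
            return None  # motion inheritance is not used for this block
        # Copy each sub-block's motion; fall back to the center motion when it is missing.
        return [[mv if mv is not None else center for mv in row] for row in first_layer_mvs]

    first_layer_mvs = [[(1, 0), None], [(2, -1), (1, 1)]]
    current_subblock_mvs = inherit_subblock_motion(first_layer_mvs)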
Abstract:
An inter-view video decoding method may include determining a disparity vector of a current second-view depth block by using a specific sample value selected within a sample value range determined based on a preset bit-depth, detecting a first-view depth block corresponding to the current second-view depth block by using the disparity vector, and reconstructing the current second-view depth block by generating a prediction block of the current second-view depth block based on coding information of the first-view depth block.
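A minimal sketch of deriving a disparity vector from a single depth sample chosen by the bit-depth. Using the middle of the sample value range and a linear depth-to-disparity conversion are illustrative assumptions.

    def default_depth_sample(bit_depth=8):
        """Middle of the sample value range [0, 2**bit_depth - 1] given by the preset bit-depth."""
        return 1 << (bit_depth - 1)

    def depth_to_disparity(depth, scale=2, offset=0, shift=8):
        """Linear conversion from a depth sample to a horizontal disparity, in integer pixels."""
        return (scale * depth + offset) >> shift

    # Disparity vector used to locate the corresponding first-view depth block.
    disparity = (depth_to_disparity(default_depth_sample(bit_depth=8)), 0)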