Abstract:
Methods and apparatuses for encoding and decoding an intra prediction mode of a prediction unit of a chrominance component based on an intra prediction mode of a prediction unit of a luminance component are provided. When the intra prediction mode of the prediction unit of the luminance component is the same as an intra prediction mode in an intra prediction mode candidate group of the prediction unit of the chrominance component, the intra prediction mode candidate group of the prediction unit of the chrominance component is reconstructed by excluding from the candidate group, or replacing, the intra prediction mode of the prediction unit of the chrominance component that is the same as the intra prediction mode of the prediction unit of the luminance component, and the intra prediction mode of the prediction unit of the chrominance component is encoded by using the reconstructed intra prediction mode candidate group.
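A minimal sketch of the candidate-group reconstruction described above, assuming HEVC-style mode numbers and a four-entry base candidate group; the constants, the replacement mode (34), and the "DM" index are illustrative assumptions rather than the claimed implementation.

PLANAR, DC, HORIZONTAL, VERTICAL, DIAGONAL = 0, 1, 10, 26, 34

def build_chroma_candidates(luma_mode):
    """Rebuild the chroma intra prediction mode candidate group for a given luma mode."""
    candidates = [PLANAR, VERTICAL, HORIZONTAL, DC]
    if luma_mode in candidates:
        # Replace the entry that duplicates the luma mode so all candidates stay
        # distinct; the luma mode itself remains reachable via the derived mode below.
        candidates[candidates.index(luma_mode)] = DIAGONAL
    return candidates

def encode_chroma_mode(luma_mode, chroma_mode):
    """Map the chosen chroma mode to an index in the rebuilt candidate group."""
    if chroma_mode == luma_mode:
        return "DM"                                # derived-from-luma mode
    # Only modes present in the rebuilt group are representable in this scheme.
    return build_chroma_candidates(luma_mode).index(chroma_mode)

print(build_chroma_candidates(VERTICAL))           # [0, 34, 10, 1]
print(encode_chroma_mode(VERTICAL, HORIZONTAL))    # 2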
Abstract:
A method and apparatus for intra predicting a video are provided. The method includes: determining availability of a predetermined number of adjacent pixels used for intra prediction of a current block; if a first adjacent pixel is unavailable, searching the predetermined number of adjacent pixels in a predetermined direction, based on the first adjacent pixel, for an available second adjacent pixel; and replacing a pixel value of the first adjacent pixel with a pixel value of the found second adjacent pixel. At least one third adjacent pixel that is unavailable and is located at a position other than the predetermined location of the first adjacent pixel is sequentially replaced by using a directly adjacent pixel in a predetermined direction.
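A minimal sketch, under assumed conventions, of the substitution rule described above: the adjacent samples are held in a single list in a fixed scan order, None marks an unavailable sample, and 8-bit video is assumed for the all-unavailable fallback.

def substitute_unavailable(neighbors):
    if all(s is None for s in neighbors):
        # Nothing is available: fall back to a mid-range value (bit depth 8 assumed).
        return [128] * len(neighbors)
    # First adjacent pixel (index 0): search forward in the predetermined
    # direction for the first available sample and copy its value.
    if neighbors[0] is None:
        neighbors[0] = next(s for s in neighbors if s is not None)
    # Remaining unavailable pixels: each copies the directly adjacent,
    # already-resolved sample that precedes it in the scan order.
    for i in range(1, len(neighbors)):
        if neighbors[i] is None:
            neighbors[i] = neighbors[i - 1]
    return neighbors

# Example: [None, None, 52, None, 60] -> [52, 52, 52, 52, 60]
print(substitute_unavailable([None, None, 52, None, 60]))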
Abstract:
Provided are entropy encoding and entropy decoding for video encoding and decoding. The video entropy decoding method includes: determining a bin string and a bin index for a maximum coding unit that is obtained from a bitstream; determining a value of a syntax element by comparing the determined bin string with bin strings that are assignable to the syntax element in the bin index; storing context variables for the maximum coding unit when the syntax element is the last syntax element in the maximum coding unit, a dependent slice segment is includable in a picture in which the maximum coding unit is included, and the maximum coding unit is the last maximum coding unit in a slice segment; and restoring symbols of the maximum coding unit by using the determined value of the syntax element.
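A minimal sketch of the storage condition described above, assuming a dictionary-based snapshot of the context variables; the flag names and data structures are illustrative, not the codec's actual state.

def maybe_store_contexts(context_variables,
                         is_last_syntax_element_in_ctu,
                         dependent_slices_enabled,
                         is_last_ctu_in_slice_segment,
                         storage):
    """Save a snapshot of the context variables when all three conditions hold."""
    if (is_last_syntax_element_in_ctu
            and dependent_slices_enabled
            and is_last_ctu_in_slice_segment):
        # Snapshot so a following dependent slice segment can resume entropy
        # decoding from this state instead of the default context initialisation.
        storage["saved"] = dict(context_variables)
    return storage

store = maybe_store_contexts({"ctx0": 63, "ctx1": 12}, True, True, True, {})
print(store)   # {'saved': {'ctx0': 63, 'ctx1': 12}}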
Abstract:
Provided are video encoding and decoding methods and apparatuses. The video encoding method includes: encoding a video based on data units having a hierarchical structure; determining a context model used for entropy encoding a syntax element of a data unit based on at least one piece of additional information of the data units; and entropy encoding the syntax element by using the determined context model.
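A minimal sketch of context-model selection from additional information of the data units, assuming the side information is the coded depth of the left and above neighbours (as when entropy encoding a split flag); the index arithmetic is an illustrative assumption.

def select_context_index(current_depth, left_depth, above_depth):
    """More neighbours deeper than the current data unit -> higher context index."""
    ctx = 0
    if left_depth is not None and left_depth > current_depth:
        ctx += 1
    if above_depth is not None and above_depth > current_depth:
        ctx += 1
    return ctx                                     # one of three context models: 0, 1, or 2

context_models = [{"state": 0}, {"state": 1}, {"state": 2}]   # placeholder models
chosen = context_models[select_context_index(current_depth=1, left_depth=2, above_depth=1)]
print(chosen)   # the model used to entropy encode the syntax element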
Abstract:
An encoding method and apparatus and a decoding method and apparatus for determining a motion vector of a current block based on a motion vector of at least one previously-encoded or previously-decoded block are provided. The decoding method includes: decoding information regarding a prediction direction from among a first direction, a second direction, and a bi-direction, and information regarding pixel values of the current block; determining the prediction direction in which the current block is to be predicted, based on the decoded information regarding the prediction direction, and determining a motion vector for predicting the current block in the determined prediction direction; and restoring the current block, based on the determined motion vector and the decoded information regarding the pixel values, wherein the first direction is a direction from a current picture to a previous picture, and the second direction is a direction from the current picture to a subsequent picture.
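A minimal sketch of reconstructing a block from the decoded prediction direction, motion vectors, and residual; the direction codes, the rounded averaging for the bi-directional case, and the predict callback are illustrative assumptions.

L0, L1, BI = 0, 1, 2     # first direction (past), second direction (future), bi-direction

def reconstruct_block(direction, mv_l0, mv_l1, predict, residual):
    """predict(mv, direction) is assumed to return a motion-compensated sample list."""
    if direction == L0:
        prediction = predict(mv_l0, L0)
    elif direction == L1:
        prediction = predict(mv_l1, L1)
    else:                                          # bi-direction: average both predictions
        p0, p1 = predict(mv_l0, L0), predict(mv_l1, L1)
        prediction = [(a + b + 1) // 2 for a, b in zip(p0, p1)]
    # Restore the block by adding the decoded residual to the prediction.
    return [p + r for p, r in zip(prediction, residual)]

# Toy usage with 4-sample "blocks" and a dummy motion-compensation function.
dummy_predict = lambda mv, d: [10, 12, 14, 16]
print(reconstruct_block(BI, (1, 0), (0, 1), dummy_predict, [1, -1, 0, 2]))   # [11, 11, 14, 18]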
Abstract:
A method of decoding video includes: receiving and parsing a bitstream that includes encoded video; extracting, from the bitstream, encoded image data of a current picture that is assigned to at least one maximum coding unit, information relating to a coded depth and an encoding mode for each of the at least one maximum coding unit, and filter coefficient information for performing loop filtering on the current picture; decoding the encoded image data in units of the at least one maximum coding unit, based on the information relating to the coded depth and the encoding mode for each of the at least one maximum coding unit; and performing deblocking on the decoded image data of the current picture and performing loop filtering on the deblocked data, based on continuous one-dimensional (1D) filtering.
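A minimal sketch of the continuous one-dimensional filtering step, assuming it amounts to a horizontal 1D filter over each row followed by a vertical 1D filter over that result, with coefficients taken from the extracted filter coefficient information; the 3-tap coefficients and border clamping are illustrative.

def filter_1d(samples, coeffs):
    """Apply a symmetric odd-length 1D filter to a list of samples."""
    half = len(coeffs) // 2
    norm = sum(coeffs)
    out = []
    for i in range(len(samples)):
        acc = 0
        for k, c in enumerate(coeffs):
            j = min(max(i + k - half, 0), len(samples) - 1)   # clamp at the borders
            acc += c * samples[j]
        out.append((acc + norm // 2) // norm)
    return out

def continuous_1d_loop_filter(picture, h_coeffs, v_coeffs):
    """picture is a list of rows; filter the rows first, then the columns of the result."""
    rows = [filter_1d(row, h_coeffs) for row in picture]
    filtered_cols = [filter_1d(list(col), v_coeffs) for col in zip(*rows)]
    return [list(row) for row in zip(*filtered_cols)]

print(continuous_1d_loop_filter([[10, 20, 30], [20, 30, 40], [30, 40, 50]],
                                [1, 2, 1], [1, 2, 1]))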
Abstract:
A method of decoding a video includes determining an initial value of a quantization parameter (QP) used to perform inverse quantization on coding units included in a slice segment, based on syntax obtained from a bitstream; determining a slice-level initial QP for predicting the QP used to perform inverse quantization on the coding units included in the slice segment, based on the initial value of the QP; and determining a predicted QP of a first quantization group of a parallel-decodable data unit included in the slice segment, based on the slice-level initial QP.
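A minimal sketch of the QP derivation described above, using HEVC-style syntax element names (init_qp_minus26, slice_qp_delta) as assumptions; the point illustrated is that the first quantization group of a parallel-decodable data unit falls back to the slice-level initial QP as its predictor.

def slice_level_initial_qp(init_qp_minus26, slice_qp_delta):
    # Initial value of the QP obtained from the bitstream, plus the slice-level delta.
    return 26 + init_qp_minus26 + slice_qp_delta

def predicted_qp_first_quantization_group(slice_qp):
    # The first quantization group of a parallel-decodable data unit has no causal
    # neighbour inside that unit, so its predicted QP is the slice-level initial QP.
    return slice_qp

slice_qp = slice_level_initial_qp(init_qp_minus26=0, slice_qp_delta=6)
print(slice_qp, predicted_qp_first_quantization_group(slice_qp))   # 32 32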
Abstract:
A method of determining a reference image for inter-prediction includes: determining a slice type of a block; if the determining of the slice type indicates that the slice type is a B-slice type configured for uni-directional prediction or bi-directional prediction, determining an inter-prediction direction of the block to be one of a first direction, a second direction, and a bi-direction; if the determining of the inter-prediction direction indicates that the inter-prediction direction is not the second direction, determining a first direction reference index from a first direction reference picture list as a reference index for the block; and if the determining of the inter-prediction direction indicates that the inter-prediction direction is not the first direction, determining a second direction reference index from a second direction reference picture list as a reference index for the block.
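A minimal sketch of the reference-index derivation for a block in a B slice; the direction codes and the parsing callbacks are illustrative assumptions.

L0, L1, BI = 0, 1, 2   # first direction, second direction, bi-direction

def determine_reference_indices(direction, read_l0_index, read_l1_index):
    """Return (ref_idx_l0, ref_idx_l1); an unused reference picture list gets no index."""
    ref_idx_l0 = ref_idx_l1 = None
    if direction != L1:                 # first direction or bi-direction
        ref_idx_l0 = read_l0_index()    # index into the first direction reference picture list
    if direction != L0:                 # second direction or bi-direction
        ref_idx_l1 = read_l1_index()    # index into the second direction reference picture list
    return ref_idx_l0, ref_idx_l1

# Toy usage: pretend both parsed indices are 0.
print(determine_reference_indices(BI, lambda: 0, lambda: 0))   # (0, 0)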
Abstract:
Provided are a method and apparatus for intra predicting an image, which generate a prediction value via linear interpolation in the horizontal and vertical directions of a current prediction unit. The method includes: generating first and second virtual pixels by using at least one adjacent pixel located to the upper right and lower left of the current prediction unit; obtaining a first prediction value of a current pixel via linear interpolation using the first virtual pixel and an adjacent left pixel located on the same line as the current pixel; obtaining a second prediction value of the current pixel via linear interpolation using the second virtual pixel and an adjacent upper pixel located on the same column as the current pixel; and obtaining a prediction value of the current pixel by using the first and second prediction values.
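A minimal sketch of the linear-interpolation prediction described above for an N x N block, assuming top and left hold N reconstructed neighbours plus one extra sample each (top[N] to the upper right, left[N] to the lower left) that serve as the first and second virtual pixels; the weights and rounding follow a planar-style formula and are illustrative.

def planar_like_prediction(top, left, n):
    pred = [[0] * n for _ in range(n)]
    top_right, bottom_left = top[n], left[n]       # first / second virtual pixels
    for y in range(n):
        for x in range(n):
            # Horizontal linear interpolation between the left neighbour of this
            # row and the upper-right virtual pixel.
            horiz = (n - 1 - x) * left[y] + (x + 1) * top_right
            # Vertical linear interpolation between the upper neighbour of this
            # column and the lower-left virtual pixel.
            vert = (n - 1 - y) * top[x] + (y + 1) * bottom_left
            # Combine the two interpolated values (rounded average).
            pred[y][x] = (horiz + vert + n) // (2 * n)
    return pred

print(planar_like_prediction(top=[30, 32, 34, 36, 38], left=[30, 28, 26, 24, 22], n=4))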