Abstract:
The present disclosure relates to signaling of sample adaptive offset (SAO) parameters determined to minimize an error between an original image and a reconstructed image in video encoding and decoding operations. An SAO decoding method includes obtaining context-encoded leftward SAO merge information and context-encoded upward SAO merge information from a bitstream of a largest coding unit (LCU); obtaining SAO on/off information context-encoded with respect to each color component, from the bitstream; when the SAO on/off information indicates that the SAO operation is to be performed, obtaining absolute offset value information for each SAO category, bypass-encoded with respect to each color component, from the bitstream; and obtaining one of band position information and edge class information bypass-encoded with respect to each color component, from the bitstream.
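For illustration only, the following is a minimal sketch of the parsing order described above. The `reader` object, its `read_context_bin`/`read_bypass_bits` methods, and the bit widths are assumptions, not elements recited in the abstract; only the ordering of the context-coded merge and on/off information followed by the bypass-coded offsets and band position or edge class information is taken from the description.

```python
NUM_SAO_CATEGORIES = 4  # four offsets per SAO type, as in HEVC

def parse_sao_component(reader, is_band_type):
    """Parse the SAO data of one color component (illustrative only)."""
    comp = {"sao_on": reader.read_context_bin("sao_on_off")}
    if not comp["sao_on"]:
        return comp
    # Absolute offset values: bypass coded, one per SAO category.
    comp["abs_offsets"] = [reader.read_bypass_bits(3)
                           for _ in range(NUM_SAO_CATEGORIES)]
    # A band-offset component carries a band position, an edge-offset
    # component an edge class; both are bypass coded.
    if is_band_type:
        comp["band_position"] = reader.read_bypass_bits(5)
    else:
        comp["edge_class"] = reader.read_bypass_bits(2)
    return comp

def parse_sao_lcu(reader, num_components=3, is_band_type=False):
    merge_left = reader.read_context_bin("sao_merge_left")
    merge_up = 0 if merge_left else reader.read_context_bin("sao_merge_up")
    if merge_left or merge_up:
        # SAO parameters are reused from the left or upper LCU.
        return {"merge_left": merge_left, "merge_up": merge_up}
    return {"merge_left": merge_left, "merge_up": merge_up,
            "components": [parse_sao_component(reader, is_band_type)
                           for _ in range(num_components)]}
```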
Abstract:
Provided are methods and apparatuses for encoding and decoding an image. The method of encoding includes: determining a maximum size of a buffer needed by a decoder to decode each image frame, a number of image frames to be reordered, and latency information of an image frame having a largest difference between an encoding order and a display order from among the image frames that form an image sequence, based on an encoding order of the image frames that form the image sequence, an encoding order of reference frames referred to by the image frames, a display order of the image frames, and a display order of the reference frames; and adding, to a mandatory sequence parameter set, a first syntax indicating the maximum size of the buffer, a second syntax indicating the number of image frames to be reordered, and a third syntax indicating the latency information.
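For illustration, here is a simplified sketch of how the three signalled values might be derived from the encoding and display orders alone. It ignores reference frames held in the buffer for prediction, and the field names are illustrative rather than normative HEVC syntax element names.

```python
def derive_sequence_parameters(frames):
    """`frames`: list of dicts with 'encode_order' and 'display_order' keys."""
    num_reorder = 0
    max_latency = 0
    for f in frames:
        # Frames displayed before f but encoded after it force f to wait
        # in the decoded-picture buffer, i.e. they must be reordered.
        reordered = sum(1 for g in frames
                        if g["encode_order"] > f["encode_order"]
                        and g["display_order"] < f["display_order"])
        num_reorder = max(num_reorder, reordered)
        # Latency of the frame whose encoding and display orders differ most.
        max_latency = max(max_latency,
                          abs(f["display_order"] - f["encode_order"]))
    max_dec_buffer_size = num_reorder + 1  # reordered frames + the current one
    return {"max_dec_buffer_size": max_dec_buffer_size,  # first syntax
            "num_reorder_frames": num_reorder,           # second syntax
            "max_latency": max_latency}                  # third syntax
```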
Abstract:
A decoding method includes obtaining random access point (RAP) reference layer number information indicating a number of layers referred to for performing inter layer prediction on RAP images among current layer images and non-RAP reference layer number information indicating a number of different layers referred to for performing inter layer prediction on non-RAP images, from a video stream including images encoded for a plurality of layers, obtaining RAP reference layer identification information for a layer referred to for predicting the RAP images based on the obtained RAP reference layer number information, from the video stream, obtaining non-RAP reference layer identification information for a layer referred to for predicting the non-RAP images based on the obtained non-RAP reference layer number information, and reconstructing a RAP image and a non-RAP image based on layer images indicated by the obtained RAP reference layer identification information and the obtained non-RAP reference layer identification information.
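A minimal sketch of the described parsing order follows; the `read_uint` reader method and the bit widths are assumptions used only to show that the number information is read first and then one layer identifier per referenced layer.

```python
def parse_inter_layer_refs(reader, id_bits=6):
    """Parse reference-layer lists for RAP and non-RAP images (illustrative)."""
    # RAP reference layer number information, then one identifier per layer.
    num_rap_refs = reader.read_uint(6)
    rap_ref_layer_ids = [reader.read_uint(id_bits) for _ in range(num_rap_refs)]
    # Non-RAP reference layer number information, then its identifiers.
    num_non_rap_refs = reader.read_uint(6)
    non_rap_ref_layer_ids = [reader.read_uint(id_bits)
                             for _ in range(num_non_rap_refs)]
    return rap_ref_layer_ids, non_rap_ref_layer_ids
```

A RAP image of the current layer is then reconstructed from the layer images identified by the first list, and a non-RAP image from those identified by the second list.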
Abstract:
Provided are methods and apparatuses for arithmetic encoding/decoding of video data. The arithmetic decoding method includes arithmetically decoding prefix bit strings representing a two-dimensional location of a last significant coefficient in a block sequentially by using a context model, arithmetically decoding suffix bit strings in a bypass mode, and performing inverse binarization on the arithmetically decoded prefix bit strings and suffix bit strings to acquire the location of the last significant coefficient in the block.
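For illustration, a sketch of the inverse-binarization step, assuming the HEVC-style prefix/suffix split in which the context-coded prefix determines whether a bypass-coded fixed-length suffix is present; the same mapping is applied separately to the x and y coordinates.

```python
def last_sig_coeff_coordinate(prefix, suffix):
    """Combine a context-coded prefix and a bypass-coded suffix (HEVC-style)."""
    if prefix <= 3:
        return prefix                  # small values: no suffix is present
    suffix_length = (prefix >> 1) - 1  # number of bypass-coded suffix bits
    return ((2 + (prefix & 1)) << suffix_length) + suffix

# Example: prefix 6 with two suffix bits of value 2 gives coordinate 10.
assert last_sig_coeff_coordinate(6, 2) == 10
```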
Abstract:
A method of decoding video includes: obtaining bit strings corresponding to current transformation coefficient level information by arithmetically decoding a bitstream based on a context model; determining a current binarization parameter by updating or maintaining a previous binarization parameter based on a comparison of a predetermined value and a size of a previous transformation coefficient; obtaining the current transformation coefficient level information by performing de-binarization of the bit strings using the determined current binarization parameter; and generating a size of a current transformation coefficient using the current transformation coefficient level information, wherein the determining of the current binarization parameter further comprises: when the size of the previous transformation coefficient is larger than the predetermined value, updating the previous binarization parameter; and when the size of the previous transformation coefficient is not larger than the predetermined value, maintaining the previous binarization parameter.
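As one concrete instance of such an update rule, the sketch below follows the Rice-parameter update used for HEVC coefficient level coding; the specific threshold 3·2^k and the cap of 4 are HEVC choices and are not recited in the abstract, which only states a comparison against a predetermined value.

```python
MAX_RICE_PARAM = 4  # HEVC caps the binarization (Rice) parameter at 4

def update_rice_param(prev_param, prev_coeff_size):
    """Update or maintain the binarization parameter for the next coefficient."""
    threshold = 3 << prev_param        # the "predetermined value" for this state
    if prev_coeff_size > threshold:
        return min(prev_param + 1, MAX_RICE_PARAM)  # update (grow) the parameter
    return prev_param                               # maintain the parameter
```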
Abstract:
Methods and apparatuses for decoding and encoding video are provided. A method includes obtaining bit strings corresponding to current transformation coefficient level information by arithmetically decoding a bitstream based on a context model that indicates a probability as to whether a bit from a bit string is a one or a zero, determining a current binarization parameter by updating or maintaining a previous binarization parameter based on a comparison of a predetermined value and a size of a previous transformation coefficient, obtaining the current transformation coefficient level information by performing de-binarization of the bit strings using the determined current binarization parameter, and generating a size of a current transformation coefficient using the current transformation coefficient level information.
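For illustration, a simplified de-binarization with a given binarization (Rice) parameter k: a plain Golomb-Rice decode with a unary prefix and k bypass-coded suffix bits. The `read_bit` callable is hypothetical, and the exponential-Golomb escape used in HEVC for very large values is omitted.

```python
def decode_golomb_rice(read_bit, k):
    """De-binarize one value with Rice parameter k (simplified sketch)."""
    quotient = 0
    while read_bit() == 1:        # unary prefix, terminated by a 0 bit
        quotient += 1
    remainder = 0
    for _ in range(k):            # k bypass-coded suffix bits
        remainder = (remainder << 1) | read_bit()
    return (quotient << k) + remainder

# Example: prefix bits 1,1,0 followed by suffix bits 1,0 with k = 2 decode to 10.
bits = iter([1, 1, 0, 1, 0])
assert decode_golomb_rice(lambda: next(bits), 2) == 10
```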
Abstract:
A multilayer video encoding method includes encoding a multilayer video, generating network abstraction layer (NAL) units for data units included in the encoded multilayer video, and adding scalable extension type information, for a scalable extension of the multilayer video, to a video parameter set (VPS) NAL unit among the NAL units, the VPS NAL unit including VPS information that is commonly applied to the multilayer video.
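A minimal sketch of attaching scalable extension type information to a VPS NAL unit is shown below. The two-byte header layout follows the HEVC NAL unit header; placing the extension type as a leading payload byte and the `scalable_ext_type` name are assumptions made only to illustrate that the information travels in the VPS NAL unit and therefore applies to the whole multilayer video.

```python
VPS_NAL_UNIT_TYPE = 32  # nal_unit_type value for a video parameter set in HEVC

def make_vps_nal_unit(vps_payload: bytes, scalable_ext_type: int,
                      layer_id: int = 0, temporal_id: int = 0) -> bytes:
    # forbidden_zero_bit(1) | nal_unit_type(6) | nuh_layer_id(6) | temporal_id_plus1(3)
    header16 = (VPS_NAL_UNIT_TYPE << 9) | (layer_id << 3) | (temporal_id + 1)
    header = header16.to_bytes(2, "big")
    # The scalable extension type information is carried inside the VPS NAL
    # unit, so it is shared by every layer of the multilayer video.
    return header + bytes([scalable_ext_type]) + vps_payload
```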
Abstract:
A video encoding method and apparatus using fast edge detection for determining a split shape of a picture are disclosed. A split shape of coding units having a tree structure is obtained by replacing each sampling unit having a predetermined size with either an edge pixel or a normal pixel, based on a maximum high-frequency component obtained through orthogonal transformation of the sampling unit, so as to obtain a down-sampled picture, and by repeatedly splitting the down-sampled picture into the coding units and splitting each coding unit into lower coding units according to whether an edge pixel is present in the coding unit.
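For illustration, a sketch of the down-sampling step under stated assumptions: a 2-D DFT stands in for the orthogonal transformation, and the unit size and threshold are arbitrary; the description only requires that each sampling unit be replaced by an edge or normal pixel according to its maximum high-frequency component.

```python
import numpy as np

def downsample_to_edge_map(picture: np.ndarray, unit: int = 8,
                           threshold: float = 100.0) -> np.ndarray:
    """Replace each unit x unit sampling block with one edge/normal pixel."""
    h, w = picture.shape
    edge_map = np.zeros((h // unit, w // unit), dtype=bool)
    for by in range(h // unit):
        for bx in range(w // unit):
            block = picture[by*unit:(by+1)*unit, bx*unit:(bx+1)*unit]
            coeffs = np.abs(np.fft.fft2(block))   # stand-in orthogonal transform
            coeffs[0, 0] = 0.0                    # drop the DC term
            edge_map[by, bx] = coeffs.max() > threshold  # edge vs. normal pixel
    return edge_map

def has_edge(edge_map: np.ndarray, x: int, y: int, size: int) -> bool:
    """A coding unit keeps splitting while any edge pixel falls inside it."""
    return bool(edge_map[y:y+size, x:x+size].any())
```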
Abstract:
Disclosed are a scalable video encoding method and apparatus and a scalable video decoding method and apparatus. The scalable video encoding method adds, to a bitstream, table index information representing one of a plurality of scalable extension type information tables in which available combinations of a plurality of scalable extension types are specified, and layer index information representing the scalable extension type of the encoded video from among the combinations of scalable extension types included in the scalable extension type information table.
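A minimal sketch of the two-level signalling follows; the table contents are invented examples, and only the lookup of a table by the table index and of a combination by the layer index reflects the description.

```python
SCALABLE_EXT_TABLES = [
    # table 0: combinations of spatial and temporal scalability (example only)
    [("spatial",), ("temporal",), ("spatial", "temporal")],
    # table 1: combinations that also involve quality and view scalability
    [("quality",), ("view",), ("spatial", "quality"), ("temporal", "view")],
]

def resolve_scalable_ext_type(table_index: int, layer_index: int):
    """Map the signalled indices to a scalable extension type combination."""
    return SCALABLE_EXT_TABLES[table_index][layer_index]

# Example: table_index = 1 and layer_index = 2 select ("spatial", "quality").
assert resolve_scalable_ext_type(1, 2) == ("spatial", "quality")
```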
Abstract:
A video encoding method includes: generating encoding symbols by performing source coding on subregions formed by splitting a picture in a vertical direction, based on blocks having a predetermined size; determining a reference block to be referred to for determining code probability information of a start block in a current subregion, the reference block being determined from among boundary blocks of a neighboring subregion which are encoded before the start block and adjacent to a boundary between the current subregion and the neighboring subregion; performing entropy encoding on blocks of the current subregion, starting from the start block, by using the encoding symbols of the blocks of the current subregion based on the code probability information of the start block determined by using code probability information of the determined reference block; and performing entropy encoding on another subregion in parallel with performing entropy encoding on the current subregion.
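For illustration, a sketch of the context inheritance that enables the parallelism described above; `entropy_encode_block` and the probability-state object are placeholders, and only the rule that a subregion's start block initializes its code probability information from a saved state of a neighboring subregion's boundary block is taken from the description.

```python
import copy

def encode_subregion(blocks, start_prob_state, entropy_encode_block):
    """Entropy-encode one subregion's blocks, snapshotting the state after each."""
    state = copy.deepcopy(start_prob_state)
    boundary_states = []
    for block in blocks:
        entropy_encode_block(block, state)            # updates `state` in place
        boundary_states.append(copy.deepcopy(state))  # saved for later subregions
    return boundary_states

def start_state_for_subregion(neighbor_boundary_states, reference_block_index):
    # The reference block is chosen among the neighboring subregion's boundary
    # blocks that are encoded before the current subregion's start block, so its
    # saved probability state is already available when the current subregion
    # begins; this is what allows the two subregions to be encoded in parallel.
    return neighbor_boundary_states[reference_block_index]
```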