Abstract:
An encoding apparatus for encoding a bitstream including an image frame is disclosed. The encoding apparatus comprises: a selection unit for selecting a plurality of pixels having non-zero transform coefficients in a transform coefficient block constituting an image frame; an inverse transform unit for generating a plurality of groups of sign candidates, each being a combination of signs assignable to the non-zero transform coefficients of the selected plurality of pixels, and for generating candidate reconstruction blocks by performing an inverse transform on each transform coefficient block in which signs are assigned to the non-zero transform coefficients according to the generated groups of sign candidates; a cost calculation unit for calculating a cost on the basis of differences between pixel values of the selected plurality of pixels in the generated candidate reconstruction blocks and pixel values of other pixels adjacent to the selected plurality of pixels; and an encoding unit for assigning different predetermined codewords to the plurality of groups of sign candidates on the basis of the calculated cost and encoding one of the codewords as sign information of the non-zero transform coefficients of the selected plurality of pixels.
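As an illustration of the sign-candidate ranking described above, the following Python sketch enumerates sign combinations for the selected coefficients, reconstructs each candidate with a generic orthonormal inverse DCT (a stand-in for the codec's actual inverse transform), and orders the candidates by a boundary-smoothness cost; the cost definition and the use of scipy's idctn are assumptions made for illustration, not the patented method itself.

    import itertools
    import numpy as np
    from scipy.fft import idctn  # generic inverse DCT; stands in for the codec's transform

    def boundary_cost(block, left_col, top_row):
        # Sum of absolute differences between the block's border pixels and the
        # already-reconstructed neighbouring pixels (smoothness assumption).
        return np.abs(block[:, 0] - left_col).sum() + np.abs(block[0, :] - top_row).sum()

    def rank_sign_candidates(coeffs, positions, left_col, top_row):
        # Try every sign combination for the selected non-zero coefficients,
        # reconstruct a candidate block for each, and sort candidates by cost so
        # that shorter codewords can be assigned to cheaper candidates.
        ranked = []
        for signs in itertools.product((1, -1), repeat=len(positions)):
            c = coeffs.astype(float)
            for (r, k), s in zip(positions, signs):
                c[r, k] = s * abs(c[r, k])
            block = idctn(c, norm='ortho')          # candidate reconstruction block
            ranked.append((boundary_cost(block, left_col, top_row), signs))
        ranked.sort(key=lambda t: t[0])
        return [signs for _, signs in ranked]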
Abstract:
Provided are a video encoding method and apparatus and a video decoding method and apparatus for producing a reconstructed video having a minimum error with respect to an original video. The video decoding method includes parsing an edge correction parameter from a bitstream, the edge correction parameter being used to correct a reconstructed pixel included in a current block, determining whether the reconstructed pixel is included in an edge region according to a first threshold value included in the edge correction parameter, determining whether the reconstructed pixel is to be corrected according to a second threshold value included in the edge correction parameter when the reconstructed pixel is included in the edge region, and compensating for a sample value of the reconstructed pixel according to a third threshold value included in the edge correction parameter when the reconstructed pixel is to be corrected.
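A minimal Python sketch of the three-threshold decision chain follows; the abstract does not specify the actual tests, so the gradient test, local-mean test, and clipped offset below are assumptions used only to show how the parsed parameters t1, t2, and t3 could drive the edge correction.

    def correct_edge_pixel(rec, neighbors, t1, t2, t3):
        # t1: edge-region test, t2: correction test, t3: bound on the compensation.
        grad = max(abs(rec - n) for n in neighbors)
        if grad <= t1:                      # not inside an edge region
            return rec
        local_mean = sum(neighbors) / len(neighbors)
        if abs(rec - local_mean) <= t2:     # pixel does not need correction
            return rec
        offset = max(-t3, min(t3, local_mean - rec))   # compensation limited by t3
        return rec + offset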
Abstract:
The present disclosure relates to technologies for a sensor network, machine-to-machine (M2M) communication, machine type communication (MTC), and an Internet of Things (IoT) network. The present disclosure may be used in intelligence services based on such technologies (smart homes, smart buildings, smart cities, smart cars or connected cars, healthcare, digital education, retail business, and security and safety-related services). Provided is a method of transmitting encrypted data from a first device to a second device while preventing identification of the transmitting and receiving devices, the method including: generating an encryption key for encrypting data; generating key identification information by using the generated encryption key, and encrypting the data with the encryption key; and transmitting a data set including the encrypted data and the key identification information to the second device.
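A hedged sketch of the data-set construction described above: the cipher (Fernet) and the choice of a truncated hash of the key as the key identification information are assumptions made for illustration; the abstract does not name a specific cipher or identifier derivation.

    import hashlib
    from cryptography.fernet import Fernet   # assumed symmetric cipher

    def build_data_set(plaintext: bytes):
        key = Fernet.generate_key()                    # generate an encryption key
        key_id = hashlib.sha256(key).digest()[:8]      # key identification information (assumed derivation)
        encrypted = Fernet(key).encrypt(plaintext)     # encrypt the data with the key
        return {"key_id": key_id, "encrypted_data": encrypted}, key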
Abstract:
Provided are multilayer video encoding/decoding methods and apparatuses. A multilayer video decoding method may comprise obtaining, from a bitstream, information indicating a maximum size of a decoded picture buffer (DPB) with respect to a layer set comprising a plurality of layers; determining a size of the DPB with respect to the layer set based on the obtained information indicating the maximum size of the DPB; and storing a decoded picture of the layer set in the DPB of the determined size.
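A brief sketch of the decoder-side buffer handling, using a hypothetical bitstream reader (read_ue is an assumed helper returning an unsigned Exp-Golomb value) and a plain list standing in for the decoded picture buffer:

    def allocate_dpb_for_layer_set(reader):
        max_dpb_size = reader.read_ue()     # signalled maximum DPB size for the layer set
        dpb = []                            # decoded picture buffer bounded by that maximum
        return max_dpb_size, dpb

    def store_picture(dpb, max_dpb_size, picture):
        if len(dpb) >= max_dpb_size:        # evict before exceeding the signalled size
            dpb.pop(0)
        dpb.append(picture)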
Abstract:
Provided are a video decoding method and a video decoding apparatus capable of performing the video decoding method. The video decoding method includes: determining neighboring pixels of a current block to be used for performing intra prediction on the current block; acquiring, from a bitstream, information indicating one of a plurality of filtering methods used on the neighboring pixels; selecting one of the plurality of filtering methods according to the acquired information; filtering the neighboring pixels by using the selected filtering method; and performing the intra prediction on the current block by using the filtered neighboring pixels, wherein the plurality of filtering methods comprise a spatial domain filtering method and a frequency domain filtering method, wherein the spatial domain filtering method filters the neighboring pixels in a spatial domain, and the frequency domain filtering method filters the neighboring pixels in a frequency domain.
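The following Python sketch shows one way the parsed indicator could select between a spatial-domain and a frequency-domain filter for the neighboring (reference) pixels; both concrete filters ([1 2 1]/4 smoothing and a DCT low-pass) are illustrative assumptions, not the signalled filters.

    import numpy as np
    from scipy.fft import dct, idct

    def filter_reference_pixels(ref, method_index):
        # method_index is the indicator parsed from the bitstream.
        ref = np.asarray(ref, dtype=float)
        if method_index == 0:                          # spatial-domain filtering
            padded = np.pad(ref, 1, mode='edge')
            return (padded[:-2] + 2 * padded[1:-1] + padded[2:]) / 4
        coeffs = dct(ref, norm='ortho')                # frequency-domain filtering
        coeffs[len(coeffs) // 2:] = 0                  # suppress high-frequency terms
        return idct(coeffs, norm='ortho')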
Abstract:
Provided are methods and apparatuses for encoding and decoding an image. The encoding method includes: determining a maximum size of a buffer required for a decoder to decode each image frame, a number of image frames to be reordered, and latency information of an image frame having a largest difference between an encoding order and a display order from among image frames that form an image sequence, based on an encoding order of the image frames that form the image sequence, an encoding order of reference frames referred to by the image frames, a display order of the image frames, and a display order of the reference frames; and adding, to a mandatory sequence parameter set, a first syntax indicating the maximum size of the buffer, a second syntax indicating the number of image frames to be reordered, and a third syntax indicating the latency information.
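The sketch below derives the three quantities from per-frame encoding (decoding) and display orders; it illustrates what the syntax elements describe rather than the normative derivation, and the "+1 for the current frame" buffer sizing is an assumption.

    def buffer_syntax_values(decode_order, display_order):
        # decode_order[i] / display_order[i]: positions of frame i in encoding
        # (decoding) order and in display order.
        n = len(decode_order)
        # Frames that decode before frame i but are displayed after it must be reordered.
        num_reorder = max(
            sum(1 for j in range(n)
                if decode_order[j] < decode_order[i] and display_order[j] > display_order[i])
            for i in range(n))
        # Latency of the frame whose display order lags its encoding order the most.
        max_latency = max(max(display_order[i] - decode_order[i], 0) for i in range(n))
        max_dpb_size = num_reorder + 1                  # reorder depth plus the current frame (assumption)
        return max_dpb_size, num_reorder, max_latency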
Abstract:
Methods and apparatuses for decoding and encoding video are provided. A method of decoding video includes: obtaining bit strings corresponding to current transformation coefficient level information by performing arithmetic decoding on a bitstream based on a context model; determining a current binarization parameter by updating or maintaining a previous binarization parameter based on a comparison of a threshold and a size of a previous transformation coefficient; obtaining the current transformation coefficient level information by performing de-binarization of the bit strings using the determined current binarization parameter; and generating a size of a current transformation coefficient using the current transformation coefficient level information, wherein the current binarization parameter has a value equal to or smaller than a predetermined value.
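A compact sketch of the update-or-maintain step: the threshold of 3 * 2**prev_param and the cap of 4 follow the HEVC-style Rice parameter rule and are assumptions about the specific values, which the abstract leaves open.

    def update_binarization_param(prev_param, prev_coeff_abs, max_param=4):
        # Increase the parameter only when the previous coefficient's size
        # exceeds the threshold; never exceed the predetermined maximum.
        threshold = 3 * (1 << prev_param)
        param = prev_param + 1 if prev_coeff_abs > threshold else prev_param
        return min(param, max_param)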
Abstract:
A video stream decoding method includes obtaining random access point (RAP) reference layer number information indicating the number of layers referred to for performing inter layer prediction on RAP images among current layer images and non-RAP reference layer number information indicating the number of different layers referred to for performing inter layer prediction on non-RAP images, from a video stream regarding images encoded for a plurality of layers; obtaining RAP reference layer identification information for each layer referred to for predicting the RAP images based on the obtained RAP reference layer number information, from the video stream; obtaining non-RAP reference layer identification information for each layer referred to for predicting the non-RAP images based on the obtained non-RAP reference layer number information, from the video stream; reconstructing an RAP image of a current layer based on a layer image indicated by the obtained RAP reference layer identification information; and reconstructing a non-RAP image of the current layer based on a layer image indicated by the obtained non-RAP reference layer identification information.
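A parsing sketch for the reference-layer information, using a hypothetical bit reader whose read_ue helper is an assumption:

    def parse_reference_layer_info(reader):
        num_rap_ref = reader.read_ue()                      # RAP reference layer number
        rap_ref_layer_ids = [reader.read_ue() for _ in range(num_rap_ref)]
        num_non_rap_ref = reader.read_ue()                  # non-RAP reference layer number
        non_rap_ref_layer_ids = [reader.read_ue() for _ in range(num_non_rap_ref)]
        return rap_ref_layer_ids, non_rap_ref_layer_ids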
Abstract:
A method of determining an offset includes dividing a current image into a plurality of blocks, determining a category of pixels in each of the plurality of blocks based on values of neighboring pixels, determining an offset value for pixels belonging to the category, and adjusting the offset value based on characteristics of the category and a background pixel value of each of the pixels. The offset value is an average of differences between original and restored values of the pixels belonging to one category. The background pixel value of a pixel is an average of the values of the pixels in the background pixel block to which that pixel belongs, among background pixel blocks into which the current image is divided for calculating background pixel values.
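The sketch below computes the two quantities defined above (the per-category offset and the background pixel value); the rule for adjusting the offset by those quantities is not specified in the abstract and is therefore not sketched.

    import numpy as np

    def category_offset(original, restored, category_mask):
        # Average difference between original and restored values of the
        # pixels that fall into one category.
        diff = original[category_mask].astype(float) - restored[category_mask]
        return diff.mean() if diff.size else 0.0

    def background_pixel_value(image, block_size, y, x):
        # Average of the background pixel block containing pixel (y, x), with the
        # image split into block_size x block_size background pixel blocks.
        by, bx = (y // block_size) * block_size, (x // block_size) * block_size
        return image[by:by + block_size, bx:bx + block_size].mean()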
Abstract:
Provided are an inter prediction method and a motion compensation method. The inter prediction method includes: performing inter prediction on a current image by using a long-term reference image stored in a decoded picture buffer; determining residual data and a motion vector of the current image generated via the inter prediction; and determining least significant bit (LSB) information as a long-term reference index indicating the long-term reference image by dividing picture order count (POC) information of the long-term reference image into most significant bit (MSB) information and the LSB information.
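A small sketch of the POC split described above; the 8-bit LSB width is an assumption for illustration, as the abstract does not fix the split point.

    def split_poc(poc, lsb_bits=8):
        # The LSB part serves as the long-term reference index; the MSB part is the remainder.
        lsb = poc & ((1 << lsb_bits) - 1)
        msb = poc - lsb            # equivalently (poc >> lsb_bits) << lsb_bits
        return msb, lsb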