Abstract:
Disclosed is a video data decoding method comprising: receiving a bitstream comprising encoded image information; decoding an image based on the encoded image information and obtaining, from data generated by decoding the image, luma data allocated to luma channels comprising a plurality of channels and chroma data allocated to a chroma channel comprising one channel; merging the obtained luma data into luma data having one component; splitting the obtained chroma data into chroma data having a plurality of components; and reconstructing the image based on the luma data having one component generated by the merging and the split chroma data having the plurality of components.
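A minimal sketch of the channel reorganization described above, assuming the luma samples arrive split across several channels and the chroma samples arrive interleaved (Cb, Cr, Cb, Cr, ...) in a single channel; the function names and sample layout are hypothetical, not the claimed implementation.

    # Hypothetical illustration of the luma merge / chroma split step.
    def merge_luma(luma_channels):
        """Concatenate luma data carried on several channels into one component."""
        merged = []
        for channel in luma_channels:      # channels assumed to hold consecutive sample runs
            merged.extend(channel)
        return merged

    def split_chroma(chroma_channel):
        """Split one interleaved chroma channel into Cb and Cr components."""
        cb = chroma_channel[0::2]          # even positions assumed to be Cb
        cr = chroma_channel[1::2]          # odd positions assumed to be Cr
        return cb, cr

    # Example: two luma channels and one interleaved chroma channel.
    y = merge_luma([[16, 17, 18], [19, 20, 21]])
    cb, cr = split_chroma([128, 129, 130, 131])
    print(y, cb, cr)   # [16, 17, 18, 19, 20, 21] [128, 130] [129, 131]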
Abstract:
Provided are a method and apparatus for determining a merge mode by using motion information of a previous prediction unit. The method of determining a merge mode includes obtaining a merge mode cost of a coding unit of a lower depth based on a merge mode cost of a coding unit of an upper depth, where the merge mode cost of the upper depth is obtained by using motion information of the merge mode of the coding unit of the upper depth that corresponds to the merge mode of the coding unit of the lower depth.
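A rough sketch of the cost-reuse idea, assuming a SAD-based cost and hypothetical helper names; how depths and merge candidates are actually organized in the method is not shown here.

    # Hypothetical sketch: reuse the upper-depth merge candidate's motion vector
    # when evaluating the merge mode of a lower-depth (smaller) coding unit.
    def sad(block_a, block_b):
        return sum(abs(a - b) for a, b in zip(block_a, block_b))

    def merge_cost(cur_block, ref_frame_fetch, motion_vector):
        """Cost of coding cur_block with the given merge motion vector."""
        predicted = ref_frame_fetch(motion_vector)
        return sad(cur_block, predicted)

    def lower_depth_merge_cost(lower_cu, ref_frame_fetch, upper_merge_mv):
        # Instead of deriving new motion information, reuse the motion information
        # already obtained for the corresponding merge mode at the upper depth.
        return merge_cost(lower_cu, ref_frame_fetch, upper_merge_mv)

    # Example with a toy 1-D "frame" and a motion vector that is an offset.
    reference = [10, 12, 14, 16, 18, 20]
    fetch = lambda mv: reference[mv: mv + 3]
    print(lower_depth_merge_cost([14, 16, 19], fetch, upper_merge_mv=2))  # SAD = 1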
Abstract:
Provided are a method and an apparatus for decoding an image by accessing a memory by a block group. The method comprises checking information regarding a size of one or more blocks included in a bitstream of an encoded image, determining whether to group one or more blocks for performing decoding, based on the information regarding the size of the one or more blocks, setting a block group including one or more blocks based on the information regarding the size of the one or more blocks, and accessing a memory by the block group to perform a parallel-pipeline decoding process by the block group.
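A minimal sketch of the block-grouping step, assuming a fixed target group size in samples; the target value and the grouping rule are illustrative assumptions, not the claimed criteria.

    # Hypothetical sketch: group small blocks so each memory access covers a
    # group of blocks instead of one block at a time.
    GROUP_TARGET = 64  # assumed minimum number of samples per memory access

    def build_block_groups(block_sizes):
        groups, current, current_size = [], [], 0
        for index, size in enumerate(block_sizes):
            current.append(index)
            current_size += size
            if current_size >= GROUP_TARGET:   # group is large enough; flush it
                groups.append(current)
                current, current_size = [], 0
        if current:                            # leftover blocks form the last group
            groups.append(current)
        return groups

    print(build_block_groups([16, 16, 16, 16, 64, 32, 32]))
    # [[0, 1, 2, 3], [4], [5, 6]]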
Abstract:
A video encoding method includes: receiving an image; up-sampling the received image; and changing a sample value of an up-sampling region included in the up-sampled image and encoding the up-sampled image by using the changed sample value, wherein the up-sampling region is a region inserted into the received image by the up-sampling.
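A minimal sketch of the up-sampling step, assuming a factor-of-two horizontal up-sampling where the inserted samples (the "up-sampling region") are overwritten with values derived from neighbouring original samples before encoding; the averaging rule is an assumption, not the claimed sample-value change.

    # Hypothetical sketch: up-sample a row by two and change the inserted samples.
    def upsample_and_fill(row):
        upsampled = []
        for i, sample in enumerate(row):
            upsampled.append(sample)                 # original sample
            nxt = row[i + 1] if i + 1 < len(row) else sample
            upsampled.append((sample + nxt) // 2)    # inserted sample, value changed
        return upsampled

    print(upsample_and_fill([10, 20, 30]))  # [10, 15, 20, 25, 30, 30]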
Abstract:
A video decoding method includes obtaining a motion vector of a current block belonging to a first picture from a bitstream, performed by a first decoding unit; determining whether a reference block indicated by the motion vector is decoded, performed by the first decoding unit; and decoding the current block, based on whether the reference block is decoded. The reference block is included in a second picture decoded by a second decoding unit. The first picture and the second picture are decoded in parallel.
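A minimal sketch of the dependency check between the two decoding units that process consecutive pictures in parallel; the row-based progress tracking and the names are illustrative assumptions only.

    # Hypothetical sketch: the first decoding unit checks whether the reference
    # block, decoded in parallel by a second unit, is already available.
    def reference_block_decoded(second_unit_progress_row, ref_block_row):
        """True if the second decoding unit has already decoded past the referenced row."""
        return second_unit_progress_row >= ref_block_row

    def decode_block(current_block, motion_vector, second_unit_progress_row):
        ref_row = motion_vector["row"]
        if not reference_block_decoded(second_unit_progress_row, ref_row):
            return "wait"            # stall the first unit until the reference is ready
        return "decode"              # safe to motion-compensate against the reference

    print(decode_block({}, {"row": 4}, second_unit_progress_row=2))  # wait
    print(decode_block({}, {"row": 4}, second_unit_progress_row=6))  # decode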
Abstract:
Provided are a method and a device for encoding a video to improve an intra prediction processing speed, and a method and a device for decoding the video. The method for encoding a video performs parallel intra prediction and includes: obtaining, by using pixels of peripheral blocks processed prior to a plurality of adjacent blocks, reference pixels used for intra prediction of each of the plurality of adjacent blocks; performing, by using the obtained reference pixels, intra prediction in parallel for each of the plurality of adjacent blocks; and adding reference pixel syntax information to a bitstream.
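A minimal sketch of the parallel intra prediction idea, using DC prediction as a stand-in prediction mode; the key point illustrated is that each block in the pair reads reference pixels only from blocks processed before the pair, so the predictions are independent. The function names and the choice of DC prediction are assumptions.

    # Hypothetical sketch: both adjacent blocks take their reference pixels from
    # previously reconstructed peripheral blocks, never from each other, so the
    # two intra predictions can run in parallel.
    def dc_predict(reference_pixels, block_size):
        dc = sum(reference_pixels) // len(reference_pixels)
        return [[dc] * block_size for _ in range(block_size)]

    def predict_pair_in_parallel(ref_left_block, ref_above_block, block_size):
        pred_a = dc_predict(ref_above_block, block_size)
        pred_b = dc_predict(ref_left_block, block_size)
        return pred_a, pred_b

    above, left = [100, 102, 104, 106], [90, 92, 94, 96]
    pa, pb = predict_pair_in_parallel(left, above, block_size=4)
    print(pa[0], pb[0])  # [103, 103, 103, 103] [93, 93, 93, 93]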
Abstract:
Provided is a processing device. The processing device includes a memory storing video content and a processor configured to divide a frame forming the video content into a plurality of coding units and to generate an encoded frame by performing encoding on each of the plurality of coding units. The processor may add, to the encoded frame, information including a motion vector obtained in the encoding process for each of the plurality of coding units.
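A minimal sketch of attaching per-coding-unit motion vectors to the encoded frame, with a toy encoder standing in for the real one; the container format and names are assumptions.

    # Hypothetical sketch: encode each coding unit and attach the motion vectors
    # gathered during encoding to the encoded frame as side information.
    def encode_frame(frame_coding_units, encode_cu):
        encoded_units, motion_vectors = [], []
        for cu in frame_coding_units:
            bits, mv = encode_cu(cu)        # encoder returns the bits and the MV it used
            encoded_units.append(bits)
            motion_vectors.append(mv)
        return {"data": encoded_units, "motion_vectors": motion_vectors}

    # Toy encoder: the "bits" are the unit itself, and the MV is a fixed placeholder.
    frame = [[1, 2], [3, 4]]
    print(encode_frame(frame, lambda cu: (cu, (0, 0))))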
Abstract:
Provided is a video decoding method including obtaining a residue of a first bit-depth with respect to a current block by decoding a bitstream; when intra predicting the current block, generating a prediction block of the current block by using a block that is previously decoded at the first bit-depth and then stored in a buffer; and generating a reconstruction block of the first bit-depth by using the prediction block and the residue of the first bit-depth. When the current block is inter predicted, the video decoding method may further include generating a prediction block of a second bit-depth by using an image previously decoded at the second bit-depth, and generating the prediction block of the current block by changing the generated prediction block of the second bit-depth to the first bit-depth. The first bit-depth is higher than the second bit-depth.
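A minimal sketch of the two prediction paths, assuming the first bit-depth is 10 bits and the second is 8 bits; the shift-based conversion and the averaging intra predictor are illustrative assumptions, not the claimed operations.

    # Hypothetical sketch of intra prediction at the higher bit-depth versus
    # inter prediction at the lower bit-depth followed by conversion.
    FIRST_BIT_DEPTH = 10
    SECOND_BIT_DEPTH = 8

    def intra_prediction(buffer_10bit_neighbors):
        # Intra prediction works directly on previously decoded 10-bit samples.
        return sum(buffer_10bit_neighbors) // len(buffer_10bit_neighbors)

    def inter_prediction(reference_8bit_sample):
        # Inter prediction uses the 8-bit reference picture; the prediction is then
        # brought up to the first (higher) bit-depth.
        shift = FIRST_BIT_DEPTH - SECOND_BIT_DEPTH
        return reference_8bit_sample << shift

    def reconstruct(prediction_10bit, residue_10bit):
        return prediction_10bit + residue_10bit

    print(reconstruct(inter_prediction(200), residue_10bit=-3))  # 797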
Abstract:
A video encoding apparatus comprises an encoder that encodes input video, a decoder that decodes the encoded video data, and filters that compensate for pixel values of the decoded video data. An adaptive loop filter (ALF) parameter predictor generates an ALF filter parameter using the decoded video data; the ALF filter parameter is applied to an ALF filter that compensates a current pixel by using pixels adjacent to the current pixel and filter coefficients for the neighboring pixels. A sample adaptive offset (SAO) filter unit compensates a current pixel of the decoded video data by using at least one of an edge offset and a band offset. An ALF filter unit applies the ALF filter, with the ALF filter parameter, to the video data to which the SAO filter has been applied, and an entropy encoder performs entropy encoding on the ALF filter parameter.
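A minimal in-loop filtering sketch showing SAO applied before ALF, using a band offset and a 3-tap symmetric filter; both the band classification and the filter taps are illustrative assumptions, not the claimed filter design.

    # Hypothetical sketch: sample adaptive offset first, then an adaptive loop
    # filter over the SAO output.
    def sao_band_offset(samples, band_offsets, band_shift=6):
        # Each sample's band is given by its upper bits; the band's offset is added.
        return [s + band_offsets.get(s >> band_shift, 0) for s in samples]

    def alf_filter(samples, coeffs):
        # Weighted sum of the current sample and its two horizontal neighbours.
        out = []
        for i, s in enumerate(samples):
            left = samples[i - 1] if i > 0 else s
            right = samples[i + 1] if i + 1 < len(samples) else s
            out.append((coeffs[0] * left + coeffs[1] * s + coeffs[2] * right) // sum(coeffs))
        return out

    reconstructed = [60, 64, 70, 200, 210]
    after_sao = sao_band_offset(reconstructed, {0: 2, 3: -4})
    after_alf = alf_filter(after_sao, coeffs=[1, 2, 1])
    print(after_sao, after_alf)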
Abstract:
An image processing apparatus is provided. The image processing apparatus includes a signal processor and a controller. The signal processor processes an image signal including a plurality of color components. The controller controls the signal processor to perform color gamut conversion, a domain transform, quantization processing, and encoding processing on an input image signal, and to skip the color gamut conversion in response to differences between the color components of the image signal being less than a first critical level.
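A minimal sketch of the conditional color gamut conversion: when a pixel's color components differ by less than a threshold, the near-grey signal is passed through unconverted. The threshold value and the BT.601-style RGB-to-YCbCr matrix are illustrative assumptions, not necessarily the apparatus's conversion.

    # Hypothetical sketch: skip the color gamut conversion for near-grey pixels.
    FIRST_CRITICAL_LEVEL = 4

    def convert_if_needed(r, g, b):
        if max(r, g, b) - min(r, g, b) < FIRST_CRITICAL_LEVEL:
            return (r, g, b)                       # skip color gamut conversion
        y = 0.299 * r + 0.587 * g + 0.114 * b
        cb = 128 - 0.169 * r - 0.331 * g + 0.5 * b
        cr = 128 + 0.5 * r - 0.419 * g - 0.081 * b
        return (round(y), round(cb), round(cr))

    print(convert_if_needed(100, 101, 102))   # passed through unchanged
    print(convert_if_needed(200, 50, 30))     # converted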