Abstract:
Provided are a method and a device for encoding a video to improve an intra prediction processing speed, and a method and a device for decoding the video. The method for encoding a video performs parallel intra prediction and includes: obtaining, by using pixels of peripheral blocks processed prior to a plurality of adjacent blocks, reference pixels used for intra prediction of each of the plurality of adjacent blocks; performing, by using the obtained reference pixels, intra prediction in parallel for each of the plurality of adjacent blocks; and adding reference pixel syntax information to a bitstream.
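The core idea above, that adjacent blocks can be intra predicted concurrently because each uses only reference pixels from blocks processed before the whole group, can be illustrated with a minimal Python sketch. The DC-style predictor, the thread pool, and all names below are illustrative assumptions, not the claimed method.

```python
from concurrent.futures import ThreadPoolExecutor

def dc_predict(ref_pixels, block_size):
    # DC-style intra prediction: fill the block with the mean of the
    # reference pixels gathered from previously processed blocks.
    avg = sum(ref_pixels) // len(ref_pixels)
    return [[avg] * block_size for _ in range(block_size)]

def predict_blocks_in_parallel(ref_pixel_sets, block_size):
    # Each adjacent block depends only on reference pixels obtained up
    # front, so all blocks can be predicted concurrently.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda refs: dc_predict(refs, block_size),
                             ref_pixel_sets))

refs = [[100, 102, 98, 104], [50, 52, 48, 54]]
blocks = predict_blocks_in_parallel(refs, block_size=2)
print(blocks[0])  # [[101, 101], [101, 101]]
```

The key point is that the reference pixels are fixed before the group is predicted, removing the block-to-block dependency that normally serializes intra prediction.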
Abstract:
Provided are a method and an apparatus for decoding an image by accessing a memory by a block group. The method comprises checking information regarding a size of one or more blocks included in a bitstream of an encoded image, determining whether to group one or more blocks for performing decoding, based on the information regarding the size of the one or more blocks, setting a block group including one or more blocks based on the information regarding the size of the one or more blocks, and accessing a memory by the block group to perform a parallel-pipeline decoding process by the block group.
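As a hypothetical sketch of the grouping step, small consecutive blocks can be collected into one group (so the decoder issues a single memory access per group) while large blocks stand alone. The threshold and grouping policy below are assumptions for illustration only.

```python
def group_blocks(block_sizes, group_threshold=16):
    # Collect consecutive small blocks into one group; a block at or
    # above the threshold forms its own group.
    groups, current = [], []
    for size in block_sizes:
        if size >= group_threshold:
            if current:
                groups.append(current)
                current = []
            groups.append([size])
        else:
            current.append(size)
    if current:
        groups.append(current)
    return groups

print(group_blocks([8, 8, 32, 4, 4, 4]))  # [[8, 8], [32], [4, 4, 4]]
```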
Abstract:
A video encoding method includes: receiving an image; up-sampling the received image; and changing a sample value of an up-sampling region included in the up-sampled image and encoding the up-sampled image by using the changed sample value, wherein the up-sampling region is a region inserted into the received image by the up-sampling.
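A minimal sketch of the idea, assuming simple edge-replication padding: the image is up-sampled to a larger size, and the inserted region is then overwritten with a constant sample value so it encodes cheaply. Function names and the constant-fill policy are illustrative assumptions.

```python
def upsample(image, target_h, target_w):
    # Pad an image (list of rows) to target_h x target_w by repeating
    # edge samples; the padded area is the up-sampling region.
    wide = [row + [row[-1]] * (target_w - len(row)) for row in image]
    return wide + [list(wide[-1]) for _ in range(target_h - len(wide))]

def flatten_upsampling_region(image, orig_h, orig_w, value=0):
    # Overwrite the inserted samples with a constant so the region
    # costs almost nothing to encode.
    for r, row in enumerate(image):
        for c in range(len(row)):
            if r >= orig_h or c >= orig_w:
                row[c] = value
    return image

img = upsample([[1, 2], [3, 4]], 3, 3)
img = flatten_upsampling_region(img, 2, 2)
print(img)  # [[1, 2, 0], [3, 4, 0], [0, 0, 0]]
```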
Abstract:
Provided is a processing device. The processing device includes a memory storing video content, and a processor configured to divide a frame of the video content into a plurality of coding units and generate an encoded frame by performing encoding for each of the plurality of coding units. The processor may add, to the encoded frame, information including a motion vector obtained in the encoding process for each of the plurality of coding units.
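The structure described above can be sketched as a stub in Python: each coding unit is "encoded" and its motion vector is attached to the resulting frame. The dictionary layout and the placeholder payload are purely illustrative assumptions.

```python
def encode_frame(coding_units):
    # Encode each CU (stubbed) and attach the motion vector found
    # during encoding to the encoded frame's side information.
    encoded = {"data": [], "motion_vectors": []}
    for cu in coding_units:
        mv = cu.get("mv", (0, 0))       # motion vector from the (stub) search
        encoded["data"].append(b"")     # placeholder for entropy-coded bits
        encoded["motion_vectors"].append(mv)
    return encoded

frame = encode_frame([{"mv": (1, -2)}, {"mv": (0, 3)}])
print(frame["motion_vectors"])  # [(1, -2), (0, 3)]
```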
Abstract:
Provided is a video decoding method including: obtaining a residue of a first bit-depth with respect to a current block by decoding a bitstream; when the current block is intra predicted, generating a prediction block of the current block by using a block that was previously decoded at the first bit-depth and stored in a buffer; and generating a reconstruction block of the first bit-depth by using the prediction block and the residue of the first bit-depth. When the current block is inter predicted, the video decoding method may further include generating a prediction block of a second bit-depth by using an image previously decoded at the second bit-depth, and generating the prediction block of the current block by changing the generated prediction block of the second bit-depth to the first bit-depth. The first bit-depth is higher than the second bit-depth.
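The bit-depth change and reconstruction can be sketched numerically. Assuming the conventional left/right-shift scaling between bit depths (e.g. 8-bit to 10-bit), an inter prediction made at the lower depth is shifted up before being combined with the higher-depth residue; the function names are assumptions.

```python
def change_bit_depth(samples, from_bits, to_bits):
    # Scale samples between bit depths by shifting, e.g. 8-bit -> 10-bit.
    shift = to_bits - from_bits
    if shift >= 0:
        return [s << shift for s in samples]
    return [s >> -shift for s in samples]

def reconstruct(prediction, residue, bit_depth):
    # Reconstruction = clip(prediction + residue) at the higher bit depth.
    max_val = (1 << bit_depth) - 1
    return [min(max(p + r, 0), max_val) for p, r in zip(prediction, residue)]

pred_8bit = [200, 50]                              # inter prediction at the second (lower) bit depth
pred_10bit = change_bit_depth(pred_8bit, 8, 10)
print(pred_10bit)                                  # [800, 200]
print(reconstruct(pred_10bit, [5, -300], 10))      # [805, 0]
```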
Abstract:
An image processing apparatus is provided. The image processing apparatus includes a signal processor and a controller. The signal processor processes an image signal including a plurality of color components. The controller controls the signal processor to perform a color gamut conversion, a domain transform, a quantization processing and an encoding processing with respect to an input image signal, and in response to differences between the color components of the image signal being less than a first critical level, to not perform the color gamut conversion.
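The skip decision can be illustrated with a toy check: when the color components of the signal differ by less than a threshold (near-gray content), the gamut conversion buys nothing and is bypassed. The per-pixel max-min criterion below is an assumption, not necessarily the patent's "first critical level" test.

```python
def should_skip_gamut_conversion(pixels, threshold):
    # Skip the color-gamut conversion when every pixel's components
    # differ by less than the threshold (the image is close to gray).
    return all(max(p) - min(p) < threshold for p in pixels)

gray_ish = [(120, 122, 121), (60, 61, 60)]
colorful = [(255, 0, 0), (0, 255, 0)]
print(should_skip_gamut_conversion(gray_ish, 8))   # True
print(should_skip_gamut_conversion(colorful, 8))   # False
```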
Abstract:
Provided is a video decoding method for reconstructing an image, the video decoding method including: obtaining reference image data from a bitstream; determining an attribute of the reference image data as a long-term reference attribute or a short-term reference attribute, according to a frequency of referring to the reference image data by image data to be decoded; storing the reference image data in a memory by using the attribute of the reference image data; and decoding an image by using the reference image data stored in the memory.
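The attribute decision can be sketched as a frequency count: reference data referred to by many pending blocks is tagged long-term so the decoder keeps it in memory, while rarely used data is tagged short-term. The threshold and naming are illustrative assumptions.

```python
from collections import Counter

def classify_references(reference_uses, long_term_threshold=3):
    # Tag reference data as long-term when it is referred to at least
    # long_term_threshold times by image data still to be decoded.
    counts = Counter(reference_uses)
    return {ref: ("long_term" if n >= long_term_threshold else "short_term")
            for ref, n in counts.items()}

uses = ["frame0", "frame0", "frame0", "frame1"]
print(classify_references(uses))
# {'frame0': 'long_term', 'frame1': 'short_term'}
```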
Abstract:
A video encoding apparatus includes an encoder that encodes input video, a decoder that decodes the encoded video data, and a filter that compensates pixel values of the decoded video data. An adaptive loop filter (ALF) parameter predictor generates an ALF filter parameter by using the decoded video data; the ALF filter parameter is applied to an ALF filter that compensates a current pixel by using pixels adjacent to the current pixel and a filter coefficient for each neighboring pixel. A sample adaptive offset (SAO) filter unit compensates a current pixel of the decoded video data by using at least one of an edge offset and a band offset. An ALF filter unit applies the ALF filter, with the ALF filter parameter, to the video data to which the SAO filter has been applied. An entropy encoder performs entropy encoding on the ALF filter parameter.
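The band-offset branch of the SAO stage can be sketched in the HEVC style, where the sample range is split into 32 equal bands and a signalled offset is added to samples falling in selected bands. This is a generic illustration of band offset, not the apparatus's specific filter chain.

```python
def sao_band_offset(sample, band_offsets, bit_depth=8):
    # HEVC-style band offset: 32 equal bands over the sample range;
    # add the offset signalled for the band this sample falls into.
    band = sample >> (bit_depth - 5)
    return sample + band_offsets.get(band, 0)

print(sao_band_offset(100, {12: 3}))  # 103  (100 >> 3 == 12)
print(sao_band_offset(40, {12: 3}))   # 40   (band 5, no offset signalled)
```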
Abstract:
An image encoding method includes: generating symbols by performing transformation and quantization, according to a transformation block, on a block predicted according to a prediction mode; updating a probability index of a current sub block by using a probability index of a previous sub block among sub blocks included in the transformation block; determining a rate according to a bit length of the current sub block by using the probability index; determining a rate of the transformation block by using rates of the sub blocks; determining a distortion by using a difference between an original image and a reconstruction image according to transformation and quantization; and determining a rate-distortion (R-D) cost by using the distortion and the rate.
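The final step combines rate and distortion in the standard Lagrangian form J = D + λ·R. A minimal sketch, assuming sum-of-squared-differences distortion (the exact distortion measure and λ are encoder choices):

```python
def ssd(original, reconstruction):
    # Distortion as the sum of squared differences.
    return sum((o - r) ** 2 for o, r in zip(original, reconstruction))

def rd_cost(distortion, rate_bits, lagrange_multiplier):
    # Rate-distortion cost J = D + lambda * R.
    return distortion + lagrange_multiplier * rate_bits

d = ssd([10, 20, 30], [11, 19, 30])                      # 1 + 1 + 0 = 2
print(rd_cost(d, rate_bits=8, lagrange_multiplier=0.5))  # 6.0
```

The encoder evaluates this cost for each candidate mode or block partition and keeps the one with the smallest J.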
Abstract:
A processing apparatus has a processor including a first memory. The processor divides a frame in video content into a plurality of coding units (CUs), and encodes the plurality of CUs in a diagonal direction to generate an encoded frame, wherein when a first CU is encoded based on a first encoding type, the processor is further configured to load, from a second memory, a first partial region of a reference frame corresponding to first position information of the first CU to the first memory and encode the first CU based on the first partial region of the reference frame loaded from the second memory, and wherein, when the first CU is encoded based on a second encoding type, the processor is further configured to encode the first CU based on a first reference pixel value corresponding to the first position information of the first CU from the first memory.
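The diagonal processing order can be sketched by enumerating CU coordinates along anti-diagonals: when each CU depends only on its upper and left neighbors, all CUs on the same diagonal are independent, which is what makes the order useful here. The generator below is an illustration of that traversal, not the claimed encoder.

```python
def diagonal_order(rows, cols):
    # Yield CU grid coordinates along anti-diagonals; CUs sharing a
    # diagonal have no up/left dependency on each other.
    for d in range(rows + cols - 1):
        for r in range(rows):
            c = d - r
            if 0 <= c < cols:
                yield (r, c)

print(list(diagonal_order(2, 2)))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```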