Abstract:
An image encoding method includes generating a first frequency coefficient matrix by transforming a predetermined block into the frequency domain; determining whether the first frequency coefficient matrix includes coefficients whose absolute values are greater than a predetermined value; generating a second frequency coefficient matrix by selectively and partially switching at least one of rows and columns of the first frequency coefficient matrix according to an angle parameter, based on a result of the determination; and selectively encoding the second frequency coefficient matrix based on the result of the determination.
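As a rough illustration of the coefficient check and the selective row/column switching described above, the following Python sketch shows one way the steps could be arranged; the function name, the threshold, and the explicit swap pairs (which in the abstract follow from the angle parameter) are illustrative assumptions, not the patented procedure.

```python
import numpy as np

def encode_block_sketch(coeff, threshold, swap_rows=None, swap_cols=None):
    """Hypothetical sketch: decide whether to switch rows/columns of a
    frequency-coefficient matrix before encoding, based on whether any
    coefficient magnitude exceeds a predetermined value."""
    # Step 1: check whether any coefficient exceeds the predetermined value.
    if not np.any(np.abs(coeff) > threshold):
        # No large coefficients: keep the first matrix as-is.
        return coeff, False

    # Step 2: build the second matrix by partially switching rows/columns.
    # In the abstract the pairs to switch follow from an angle parameter;
    # here they are passed in explicitly for illustration.
    second = coeff.copy()
    for i, j in (swap_rows or []):
        second[[i, j], :] = second[[j, i], :]
    for i, j in (swap_cols or []):
        second[:, [i, j]] = second[:, [j, i]]
    return second, True

# Example: an 8x8 coefficient matrix with one large value triggers the switch.
block = np.zeros((8, 8)); block[0, 7] = 40.0
reordered, switched = encode_block_sketch(block, threshold=32.0, swap_cols=[(1, 7)])
```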
Abstract:
Provided are video encoding and video decoding methods for performing clipping on data, on a block-by-block basis, during inverse quantization and inverse transformation. An inverse transformation method includes: receiving quantized transformation coefficients of a current block; clipping transformation coefficients, generated by inverse-quantizing the quantized transformation coefficients, to a range between a first maximum value and a first minimum value that are determined based on a size of the current block; clipping intermediate data, generated by performing first inverse transformation on the clipped transformation coefficients by using a first inverse transformation matrix, to a range between a second maximum value and a second minimum value that are determined based on the size of the current block and an internal bit depth; and performing second inverse transformation on the clipped intermediate data by using a second inverse transformation matrix.
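The two clipping points in that pipeline could be arranged roughly as in the following Python sketch; the clipping ranges used here (a fixed signed 16-bit coefficient range) are placeholders for the block-size- and bit-depth-dependent bounds that the abstract describes, and the inverse quantization and transforms are simplified.

```python
import numpy as np

def inverse_transform_with_clipping(quantized, qstep, inv_t1, inv_t2, coeff_bits=16):
    """Illustrative sketch of clipping after inverse quantization and after the
    first of two inverse-transform stages (ranges are placeholders)."""
    # Inverse quantization, simplified to a uniform scaling.
    coeff = quantized.astype(np.int64) * qstep

    # First clip: the abstract derives the bounds from the block size;
    # a signed 16-bit range stands in for them here.
    max1 = (1 << (coeff_bits - 1)) - 1
    coeff = np.clip(coeff, -max1 - 1, max1)

    # First inverse transformation (e.g. column-wise).
    intermediate = inv_t1 @ coeff

    # Second clip: the abstract derives these bounds from the block size and
    # the internal bit depth; the same placeholder range is reused here.
    intermediate = np.clip(intermediate, -max1 - 1, max1)

    # Second inverse transformation (e.g. row-wise).
    return intermediate @ inv_t2.T
```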
Abstract:
Provided are a video encoding method and apparatus and a video decoding method and apparatus. In the video encoding method, a first predicted coding unit of a current coding unit that is to be encoded is produced, a second predicted coding unit is produced by changing the value of each pixel of the first predicted coding unit by using that pixel and at least one of its neighboring pixels, and the difference between the current coding unit and the second predicted coding unit is encoded, thereby improving video prediction efficiency.
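One way to picture the second predicted coding unit is as a neighbour-weighted smoothing of the first prediction, as in the Python sketch below; the specific weights and the use of only the upper and left neighbours are illustrative assumptions rather than the method claimed.

```python
import numpy as np

def refine_prediction(first_pred):
    """Sketch: produce a second predicted coding unit by replacing each pixel
    of the first prediction with a weighted combination of that pixel and its
    upper and left neighbours (weights are illustrative)."""
    h, w = first_pred.shape
    second = first_pred.astype(np.float64).copy()
    for y in range(h):
        for x in range(w):
            total, weight = 2.0 * first_pred[y, x], 2.0
            if y > 0:
                total += first_pred[y - 1, x]; weight += 1.0
            if x > 0:
                total += first_pred[y, x - 1]; weight += 1.0
            second[y, x] = total / weight
    return np.rint(second).astype(first_pred.dtype)

# The residual that is actually encoded would then be
# current_coding_unit - refine_prediction(first_pred).
```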
Abstract:
Provided is a video decoding method including: obtaining a first motion vector indicating a first reference block of a current block in a first reference picture and a second motion vector indicating a second reference block of the current block in a second reference picture; obtaining a parameter related to pixel group unit motion compensation of the current block, based on at least one of information about the parameter related to the pixel group unit motion compensation and a parameter related to an image including the current picture; generating a prediction block by performing, with respect to the current block, block unit motion compensation based on the first motion vector and the second motion vector and performing the pixel group unit motion compensation based on the parameter related to the pixel group unit motion compensation; and reconstructing the current block. Here, a pixel group may include at least one pixel.
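A minimal sketch of the two compensation stages, assuming a simple average of the two reference blocks for the block-unit part and a per-group additive adjustment standing in for whatever the pixel-group-unit parameter actually controls:

```python
import numpy as np

def bi_predict_with_group_adjustment(ref0_block, ref1_block, group_offsets, group_size=2):
    """Sketch: block-unit bidirectional motion compensation (average of the two
    reference blocks fetched with the two motion vectors), followed by a
    per-pixel-group adjustment. `group_offsets` has one entry per
    group_size x group_size pixel group and is a hypothetical stand-in for the
    pixel-group-unit motion compensation."""
    pred = (ref0_block.astype(np.int32) + ref1_block.astype(np.int32) + 1) >> 1
    h, w = pred.shape
    for gy in range(0, h, group_size):
        for gx in range(0, w, group_size):
            pred[gy:gy + group_size, gx:gx + group_size] += \
                group_offsets[gy // group_size, gx // group_size]
    return pred
```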
Abstract:
An image decoding method and apparatus according to an embodiment may extract, from a bitstream, a quantization coefficient generated through core transformation, secondary transformation, and quantization; generate an inverse-quantization coefficient by performing inverse quantization on the quantization coefficient; generate a secondary inverse-transformation coefficient by performing secondary inverse-transformation on a low frequency component of the inverse-quantization coefficient, the secondary inverse-transformation corresponding to the secondary transformation; and perform core inverse-transformation on the secondary inverse-transformation coefficient, the core inverse-transformation corresponding to the core transformation.
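The order of operations at the decoder can be sketched as below, assuming the secondary inverse transformation is applied only to a small top-left (low-frequency) sub-block and both transforms are expressed as matrix multiplications; the matrices and the sub-block size are illustrative assumptions.

```python
import numpy as np

def inverse_transform_pipeline(dequantized, secondary_inv, core_inv, low_size=4):
    """Sketch: secondary inverse transformation on the low-frequency region of
    the inverse-quantized coefficients, then core inverse transformation on the
    whole block."""
    coeff = dequantized.astype(np.float64).copy()

    # Secondary inverse transform on the top-left (low-frequency) sub-block.
    low = coeff[:low_size, :low_size]
    coeff[:low_size, :low_size] = secondary_inv @ low @ secondary_inv.T

    # Separable core inverse transform on the full block.
    return core_inv @ coeff @ core_inv.T
```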
Abstract:
Provided are a method and apparatus for performing transformation and inverse transformation on a current block by using multi-core transformation kernels in video encoding and decoding processes. A video decoding method may include obtaining, from a bitstream, multi-core transformation information indicating whether multi-core transformation kernels are to be used according to a size of the current block; obtaining horizontal transform kernel information and vertical transform kernel information from the bitstream when the multi-core transformation kernels are used according to the multi-core transformation information; determining a horizontal transform kernel for the current block according to the horizontal transform kernel information; determining a vertical transform kernel for the current block according to the vertical transform kernel information; and performing inverse transformation on the current block by using the horizontal transform kernel and the vertical transform kernel.
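A sketch of how independently signalled horizontal and vertical kernels could be applied at inverse transformation; the kernel set (DCT-II and DST-VII), the index mapping, and the function names are assumptions for illustration, not the signalled syntax.

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II matrix, used here as one candidate kernel."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] /= np.sqrt(2.0)
    return m

def dst7_matrix(n):
    """Orthonormal DST-VII matrix, used here as a second candidate kernel."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.sin(np.pi * (2 * k + 1) * (i + 1) / (2 * n + 1)) * 2.0 / np.sqrt(2 * n + 1)

KERNELS = {0: dct2_matrix, 1: dst7_matrix}  # hypothetical kernel indices

def inverse_transform(coeff, h_kernel_idx, v_kernel_idx):
    """Sketch: choose the horizontal and vertical kernels independently, as the
    signalled kernel information would dictate, then inverse-transform."""
    n_v, n_h = coeff.shape
    v = KERNELS[v_kernel_idx](n_v)
    h = KERNELS[h_kernel_idx](n_h)
    # Forward transform would be V @ X @ H.T, so the inverse is V.T @ C @ H.
    return v.T @ coeff @ h
```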
Abstract:
Provided is a video decoding method for signaling sample adaptive offset (SAO) parameters, the video decoding method including: obtaining, from a bitstream, position information of each of a plurality of bandgroups with respect to a current block included in a video; obtaining, from the bitstream, offsets with respect to bands included in each of the plurality of bandgroups; determining the plurality of bandgroups so as to compensate for a pixel sample value of the current block, based on the position information of each of the plurality of bandgroups; and compensating for a sample value of a reconstructed pixel included in the current block, by using the obtained offsets. Here, each of the plurality of determined bandgroups includes at least one band.
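Band-offset compensation with several signalled bandgroups could be applied roughly as follows; the 32-band split of the pixel range, the number of bands per group, and the names are illustrative assumptions.

```python
import numpy as np

def apply_band_offsets(recon, band_start_positions, band_offsets,
                       bit_depth=8, bands_per_group=4):
    """Sketch: split the pixel range into 32 bands; each signalled bandgroup
    covers a few consecutive bands, and reconstructed pixels falling into
    those bands receive the corresponding offset."""
    num_bands = 32
    shift = bit_depth - 5                 # band index = pixel >> shift
    offset_of_band = np.zeros(num_bands, dtype=np.int32)
    for start, offsets in zip(band_start_positions, band_offsets):
        for k, off in enumerate(offsets[:bands_per_group]):
            offset_of_band[(start + k) % num_bands] = off

    band_idx = recon.astype(np.int32) >> shift
    out = recon.astype(np.int32) + offset_of_band[band_idx]
    return np.clip(out, 0, (1 << bit_depth) - 1)
```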
Abstract:
A video encoding method and apparatus and a video decoding method and apparatus generate a restored image having a minimum error with respect to an original image, based on offset merge information indicating whether offset parameters of a current block and of at least one neighboring block from among blocks of the video are identical.
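On the encoder side, the offset merge information can be thought of as a flag chosen by comparing the current block's offset parameters with those of its neighbours, as in this hypothetical sketch (the parameter layout and the neighbour set are assumptions):

```python
def offset_merge_info(current_params, left_params, above_params):
    """Sketch: signal offset merge information instead of new offset parameters
    when the current block's offset parameters match a neighbouring block's."""
    if left_params is not None and current_params == left_params:
        return {"merge_left": True, "merge_up": False}
    if above_params is not None and current_params == above_params:
        return {"merge_left": False, "merge_up": True}
    # Neither neighbour matches: the offsets are sent explicitly.
    return {"merge_left": False, "merge_up": False, "offsets": current_params}
```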
Abstract:
A method for decoding an image includes performing intra prediction on a chrominance block according to whether an intra prediction mode of the chrominance block is equal to an intra prediction mode of a luminance block.
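The dependency on the luminance mode can be sketched as follows; the candidate list, the index of the "derived" entry, and the substitute mode are assumptions in the style of common codecs, not taken from the patent.

```python
def chroma_prediction_mode(signalled_chroma_idx, luma_mode, dm_index=4):
    """Sketch: when the signalled chroma index points at the 'derived' entry,
    the chrominance block reuses the luminance block's intra prediction mode;
    otherwise a listed mode is used, with a substitute if it collides."""
    candidate_modes = [0, 1, 10, 26]   # e.g. planar, DC, horizontal, vertical
    if signalled_chroma_idx == dm_index:
        return luma_mode               # chroma mode equals the luma mode
    mode = candidate_modes[signalled_chroma_idx]
    # If the listed mode would duplicate the luma mode, substitute another one.
    return 34 if mode == luma_mode else mode
```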
Abstract:
Provided is an image encoding method including: determining at least one sample value related to a first block, based on sample values of previously reconstructed reference samples; determining at least one pattern in which samples of the first block are to be arranged; generating one or more candidate prediction blocks for the first block, based on the at least one sample value and the at least one pattern; and determining prediction values of the samples of the first block, based on one of the one or more candidate prediction blocks.
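A small sketch of how candidate prediction blocks might be formed from a few derived sample values and arrangement patterns; the two patterns shown (uniform fill and a horizontal split) and all names are illustrative assumptions.

```python
import numpy as np

def candidate_prediction_blocks(top_refs, left_refs, size=4):
    """Sketch: derive sample values from previously reconstructed reference
    samples, arrange them according to a few simple patterns, and keep the
    results as candidate prediction blocks."""
    v_top = int(np.round(np.mean(top_refs)))
    v_left = int(np.round(np.mean(left_refs)))
    v_all = (v_top + v_left + 1) // 2

    candidates = []
    # Pattern 1: every sample takes the combined value.
    candidates.append(np.full((size, size), v_all, dtype=np.int32))
    # Pattern 2: upper half from the top references, lower half from the left.
    split = np.empty((size, size), dtype=np.int32)
    split[: size // 2, :] = v_top
    split[size // 2 :, :] = v_left
    candidates.append(split)
    return candidates
```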