Abstract:
A multi-layer video encoding/decoding method and a multi-layer video encoding/decoding apparatus are provided. In the multi-layer video encoding method, image data is encoded to a multi-layer encoded image, at least one of encoded layers of a target output layer set is determined as an output layer, an index of at least three output layer subsets including at least one output layer from among the encoded layers of the target output layer set is generated based on the determined output layer, and a bitstream including the generated index and the multi-layer encoded image is generated.
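A minimal Python sketch of how such an output-layer index could be built and packed next to the coded layers; the types and names used here (OutputLayerSet, build_output_layer_index, write_bitstream) are illustrative assumptions, not the syntax defined by the method:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class OutputLayerSet:
        layer_ids: List[int]                        # encoded layers of the target output layer set
        output_flags: Dict[int, bool] = field(default_factory=dict)

    def build_output_layer_index(ols: OutputLayerSet, output_layers: List[int]) -> List[int]:
        """Mark each encoded layer as output or not and return the index of output layers."""
        for lid in ols.layer_ids:
            ols.output_flags[lid] = lid in output_layers
        return [lid for lid in ols.layer_ids if ols.output_flags[lid]]

    def write_bitstream(index: List[int], coded_layers: Dict[int, bytes]) -> dict:
        """Bundle the generated index with the multi-layer encoded image data."""
        return {"output_layer_index": index, "layers": coded_layers}

    if __name__ == "__main__":
        ols = OutputLayerSet(layer_ids=[0, 1, 2])
        index = build_output_layer_index(ols, output_layers=[0, 2])
        bitstream = write_bitstream(index, {0: b"...", 1: b"...", 2: b"..."})
        print(bitstream["output_layer_index"])      # [0, 2]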
Abstract:
A video decoding method includes determining, from among a first sample and a second sample with different color components, at least one second sample that is used to correct a value of the first sample; determining a filter parameter set based on a band including the value of the first sample, wherein the band is from among a plurality of bands determined by dividing a total range of sample values into predetermined intervals; and filtering a value of the at least one second sample by using the determined filter parameter set and correcting the value of the first sample by using a value obtained by the filtering, wherein the first sample is any one of a luma sample and a chroma sample, and the second sample is any one of the luma sample and the chroma sample that is not the first sample.
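A minimal Python sketch of the band-based cross-component correction described above, assuming 8-bit samples, four equal-width bands, and a simple weighted-sum filter per band; the band count, weights, offsets, and correction rule are illustrative assumptions:

    NUM_BANDS = 4          # illustrative: number of equal-width bands over the total sample range
    MAX_VAL = 255          # illustrative: 8-bit samples

    def band_of(value: int) -> int:
        """Index of the band containing 'value' when the total range is split into equal intervals."""
        width = (MAX_VAL + 1) // NUM_BANDS
        return min(value // width, NUM_BANDS - 1)

    # one (weights, offset) filter parameter set per band -- illustrative values
    FILTER_PARAMS = {
        0: ([1, 2, 1], 0),
        1: ([0, 4, 0], 1),
        2: ([1, 2, 1], -1),
        3: ([0, 4, 0], 0),
    }

    def correct_first_sample(first: int, second_samples: list) -> int:
        """Correct a first (e.g. chroma) sample using co-located second (e.g. luma) samples."""
        weights, offset = FILTER_PARAMS[band_of(first)]
        filtered = sum(w * s for w, s in zip(weights, second_samples)) >> 2   # weights sum to 4
        correction = filtered - second_samples[1]        # detail taken from the other component
        return max(0, min(MAX_VAL, first + correction + offset))

    if __name__ == "__main__":
        print(correct_first_sample(130, [128, 132, 126]))   # 126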
Abstract:
An interlayer video decoding method includes: reconstructing a color image and a depth image of a first layer based on coding information of the color image and the depth image of the first layer obtained from a bitstream; determining whether a prediction mode of a current block of a second layer image to be decoded is a view synthesis prediction mode that predicts the current block based on an image synthesized from a first layer image; determining a depth-based disparity vector indicating a depth-corresponding block of the first layer with respect to the current block, when the prediction mode is the view synthesis prediction mode; performing view synthesis prediction on the current block from the depth-corresponding block of the first layer indicated by the depth-based disparity vector; and reconstructing the current block by using a prediction block generated by the view synthesis prediction.
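A minimal Python sketch of the depth-based disparity and view synthesis prediction steps, assuming a linear depth-to-disparity conversion and a horizontal-only disparity; the scale/offset values and array interfaces are illustrative assumptions:

    import numpy as np

    def depth_to_disparity(depth_block: np.ndarray, scale: float = 0.05, offset: float = 0.0) -> int:
        """Derive a horizontal disparity (in pixels) from the depth-corresponding block."""
        return int(round(float(depth_block.max()) * scale + offset))

    def view_synthesis_prediction(ref_view: np.ndarray, x: int, y: int, size: int,
                                  depth_block: np.ndarray) -> np.ndarray:
        """Fetch the first layer block pointed to by the depth-based disparity vector."""
        d = depth_to_disparity(depth_block)
        xs = min(max(x + d, 0), ref_view.shape[1] - size)   # keep the block inside the picture
        return ref_view[y:y + size, xs:xs + size]

    def reconstruct(prediction: np.ndarray, residual: np.ndarray) -> np.ndarray:
        """Reconstruct the current block from the prediction block and the decoded residual."""
        return np.clip(prediction.astype(int) + residual, 0, 255).astype(np.uint8)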
Abstract:
A multi-layer video coding method includes generating network abstraction layer (NAL) units for each data unit by dividing a multi-layer video according to data units, and adding scalable information to a video parameter set (VPS) NAL unit from among pieces of transmission unit data for each data unit.
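A minimal Python sketch of packing per-data-unit NAL units and attaching scalable information only to the VPS NAL unit; the type codes and field names are illustrative assumptions and do not reproduce the actual NAL unit syntax:

    VPS_NAL_TYPE = 32                    # illustrative type code for the VPS NAL unit

    def make_nal(nal_type: int, layer_id: int, payload: bytes, scalable_info: dict = None) -> dict:
        """Wrap one data unit into a NAL-unit-like record."""
        nal = {"type": nal_type, "layer_id": layer_id, "payload": payload}
        if nal_type == VPS_NAL_TYPE and scalable_info is not None:
            nal["scalable_info"] = scalable_info    # e.g. number of layers, layer dependencies
        return nal

    def pack_multilayer(data_units: list, scalable_info: dict) -> list:
        """Emit a VPS NAL unit carrying the scalable information, then one NAL unit per data unit."""
        nals = [make_nal(VPS_NAL_TYPE, 0, b"", scalable_info)]
        nals += [make_nal(1, layer_id, payload) for layer_id, payload in data_units]
        return nals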
Abstract:
Provided are a multi-view video decoding apparatus and method and a multi-view encoding apparatus and method. The decoding method includes: determining whether a prediction mode of a current block being decoded is a merge mode; when the prediction mode is determined to be the merge mode, forming a merge candidate list including at least one of an inter-view candidate, a spatial candidate, a disparity candidate, a view synthesis prediction candidate, and a temporal candidate; and predicting the current block by selecting a merge candidate for predicting the current block from the merge candidate list, wherein whether to include, in the merge candidate list, at least one of a view synthesis prediction candidate for an adjacent block of the current block and a view synthesis prediction candidate for the current block is determined based on whether view synthesis prediction is performed on the adjacent block and the current block.
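A minimal Python sketch of the candidate-list rule described above, in which the view synthesis prediction candidates of the adjacent block and the current block are appended only when view synthesis prediction is actually used for them; the candidate objects and function names are illustrative assumptions:

    def build_merge_candidate_list(inter_view, spatial, disparity, temporal,
                                   neighbor_uses_vsp: bool, current_uses_vsp: bool,
                                   neighbor_vsp_cand=None, current_vsp_cand=None) -> list:
        """Form the merge candidate list; VSP candidates are conditional on VSP usage."""
        candidates = [c for c in (inter_view, *spatial, disparity, temporal) if c is not None]
        if neighbor_uses_vsp and neighbor_vsp_cand is not None:
            candidates.append(neighbor_vsp_cand)
        if current_uses_vsp and current_vsp_cand is not None:
            candidates.append(current_vsp_cand)
        return candidates

    def predict_current_block(candidates: list, merge_idx: int):
        """The decoder selects the candidate signalled by merge_idx and predicts from it."""
        return candidates[merge_idx]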
Abstract:
An interlayer video decoding method comprises reconstructing a first layer image based on encoding information acquired from a first layer bitstream; reconstructing a second layer current block, which is determined to have a predetermined partition mode and prediction mode, by using interlayer prediction information acquired from a second layer bitstream and a first layer reference block of the first layer reconstructed image, the first layer reference block corresponding to the current block to be reconstructed in the second layer; determining whether to perform luminance compensation on the second layer current block in a partition mode in which the second layer current block is not split; and compensating for the luminance of the second layer current block according to whether luminance compensation is performed, and reconstructing a second layer image including the second layer current block whose luminance is compensated for.
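A minimal Python sketch of the luminance-compensation step, assuming a linear model (scale * prediction + offset) derived from neighboring reconstructed samples and applied only when the block is not split; the parameter derivation shown is an illustrative assumption, not the normative one:

    import numpy as np

    def derive_ic_params(cur_neighbors: np.ndarray, ref_neighbors: np.ndarray):
        """Fit a linear luminance model from neighboring reconstructed samples (illustrative)."""
        scale = (cur_neighbors.std() + 1e-6) / (ref_neighbors.std() + 1e-6)
        offset = cur_neighbors.mean() - scale * ref_neighbors.mean()
        return scale, offset

    def compensate_luminance(pred_block: np.ndarray, ic_flag: bool, block_not_split: bool,
                             cur_neighbors: np.ndarray, ref_neighbors: np.ndarray) -> np.ndarray:
        """Apply luminance compensation only when signalled and only for a non-split block."""
        if not (ic_flag and block_not_split):
            return pred_block
        scale, offset = derive_ic_params(cur_neighbors, ref_neighbors)
        return np.clip(scale * pred_block + offset, 0, 255).astype(np.uint8)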
Abstract:
A method of generating a parameter set includes obtaining common information commonly inserted into at least two lower parameter sets referring to the same upper parameter set; determining whether the common information is to be added to at least one among the upper parameter set and the at least two lower parameter sets; and adding the common information to at least one among the upper parameter set and the at least two lower parameter sets, based on a result of the determining.
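A minimal Python sketch of hoisting information shared by two lower parameter sets into the upper parameter set they both refer to; the dict-based parameter sets and the decision rule are illustrative assumptions:

    def common_fields(lower_a: dict, lower_b: dict) -> dict:
        """Information carried identically by both lower parameter sets."""
        return {k: v for k, v in lower_a.items() if k in lower_b and lower_b[k] == v}

    def hoist_common_info(upper: dict, lower_a: dict, lower_b: dict, add_to_upper: bool = True):
        """Add the common information to the upper parameter set and drop it from the lower ones."""
        shared = common_fields(lower_a, lower_b)
        if add_to_upper:
            upper = {**upper, **shared}
            lower_a = {k: v for k, v in lower_a.items() if k not in shared}
            lower_b = {k: v for k, v in lower_b.items() if k not in shared}
        return upper, lower_a, lower_b

    if __name__ == "__main__":
        sps = {"sps_id": 0}
        pps1 = {"pps_id": 0, "tiles_enabled": True, "init_qp": 26}
        pps2 = {"pps_id": 1, "tiles_enabled": True, "init_qp": 30}
        print(hoist_common_info(sps, pps1, pps2))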
Abstract:
Provided are an inter-layer video encoding method and apparatus therefor and an inter-layer video decoding method and apparatus therefor. An inter-layer video decoding method involves reconstructing a first layer image based on encoding information obtained from a first layer bitstream; in order to reconstruct a second layer block determined to have a predetermined partition type and prediction mode, determining whether to perform illumination compensation on the reconstructed second layer block, the reconstructed second layer block being determined by using a first layer reference block that is from among the reconstructed first layer image and corresponds to the second layer block; and generating the reconstructed second layer block by using inter-layer prediction information obtained from a second layer bitstream and the first layer reference block, and generating a second layer image including the reconstructed second layer block whose illumination is compensated for according to whether the illumination compensation was performed.
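A minimal Python sketch of the decoder-side decision described above, in which the illumination compensation flag is read only for the predetermined partition type (assumed here to be 2Nx2N); the bit-reading interface is an illustrative assumption:

    PART_2Nx2N = "2Nx2N"        # the predetermined (non-split) partition type assumed here

    def decode_ic_flag(read_bit, partition_type: str, prediction_mode: str) -> bool:
        """Read the illumination compensation flag only when the block qualifies."""
        if partition_type == PART_2Nx2N and prediction_mode == "INTER":
            return bool(read_bit())
        return False            # no flag is signalled; compensation is off

    if __name__ == "__main__":
        bits = iter([1])
        print(decode_ic_flag(lambda: next(bits), PART_2Nx2N, "INTER"))   # True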
Abstract:
Provided are a material for forming a channel layer for a stretchable TFT, a method of preparing a channel layer for a stretchable TFT, a channel layer for a stretchable TFT, and a stretchable TFT. The material for forming the channel layer for the stretchable TFT includes an elastomer, an organic semiconductor material, and a solvent. By mixing the elastomer and the organic semiconductor material and forming a thin film, a channel layer having excellent conductivity and stretchability may be obtained.
Abstract:
A method of managing sessions between a plurality of cloud servers and a client by a multi-session managing apparatus is provided. The method includes receiving respective screen images of the plurality of cloud servers through a multi-session with the plurality of cloud servers; generating a single bitstream by using the screen images; and transmitting the single bitstream to the client through a single session with the client.
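A minimal Python sketch of the multi-session flow, assuming single-channel screen images that are tiled into one frame before being coded into a single bitstream; the session and encoder interfaces (recv_screen_image, encode_frame, send) are hypothetical assumptions:

    import numpy as np

    def compose_frame(screen_images: list) -> np.ndarray:
        """Tile the per-server screen images side by side into one frame."""
        height = max(img.shape[0] for img in screen_images)
        padded = [np.pad(img, ((0, height - img.shape[0]), (0, 0))) for img in screen_images]
        return np.hstack(padded)

    def relay_once(server_sessions, client_session, recv_screen_image, encode_frame):
        """Receive one screen image per cloud-server session, code them as one bitstream,
        and send it to the client over the single client session."""
        images = [recv_screen_image(s) for s in server_sessions]
        bitstream = encode_frame(compose_frame(images))
        client_session.send(bitstream)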