Abstract:
Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, a first flag indicating whether or not an atlas includes a patch that contains information on an entire region of a first source video may be encoded into the metadata.
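The per-view flag described above can be pictured with a minimal Python sketch. The names AtlasMetadata, complete_view_flags, and set_complete_view_flag are hypothetical and do not reproduce the actual bitstream syntax; this only illustrates recording a one-bit indication per source view.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class AtlasMetadata:
    # Hypothetical container; real metadata syntax elements are not reproduced here.
    atlas_id: int
    complete_view_flags: Dict[int, int] = field(default_factory=dict)

def set_complete_view_flag(meta: AtlasMetadata, view_id: int, has_full_view_patch: bool) -> None:
    """Record a 1-bit flag telling the decoder whether this atlas carries a patch
    covering the entire region of the given source view."""
    meta.complete_view_flags[view_id] = 1 if has_full_view_patch else 0

meta = AtlasMetadata(atlas_id=0)
set_complete_view_flag(meta, view_id=3, has_full_view_patch=True)
print(meta.complete_view_flags)   # {3: 1}
```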
Abstract:
The present invention relates to a video encoding/decoding method and apparatus, and more particularly, to a method and apparatus for generating a reference image for a multiview video. The video encoding method includes, in the presence of a second image having a view different from a first view of a first image, transforming the second image to have the first view, generating a reference image by adding the transformed second image to a side of the first image, and storing the reference image in a reference picture list.
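As an illustration of the transform-and-append step, the sketch below assumes a caller-supplied warp function standing in for the actual view transform; it concatenates the warped second-view image onto the side of the first-view image and stores the result in a reference picture list. All names here are illustrative, not the method's actual definitions.

```python
import numpy as np

def build_extended_reference(first_img: np.ndarray,
                             second_img: np.ndarray,
                             warp_to_first_view) -> np.ndarray:
    """Warp the second-view image into the first view and append it to the right
    side of the first image, producing one wider reference picture."""
    warped = warp_to_first_view(second_img)        # stand-in for the actual view transform
    return np.concatenate([first_img, warped], axis=1)

# Toy usage with an identity "warp" in place of a real depth- or homography-based warp.
first = np.zeros((4, 4), dtype=np.uint8)
second = np.ones((4, 4), dtype=np.uint8)
reference_picture_list = [build_extended_reference(first, second, lambda img: img)]
print(reference_picture_list[0].shape)   # (4, 8): first view with the warped second view at its side
```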
Abstract:
Provided are a method and apparatus for outputting an additional image independently of a reference image in a broadcasting receiver. The broadcasting receiver includes a communicator configured to receive a stream of an additional image of a three-dimensional (3D) broadcast in non-real time and to receive a stream of a reference image of the 3D broadcast in real time, and a processor configured to generate a 3D image of the 3D broadcast based on the stream of the additional image and the stream of the reference image.
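A rough sketch of the receiver behaviour described above, under the simplifying assumption that additional-view frames are fully buffered before the matching reference frames arrive. HybridReceiver and its methods are illustrative names, not an actual broadcast API.

```python
class HybridReceiver:
    """Pairs a pre-downloaded (non-real-time) additional-view frame with the
    live (real-time) reference frame of the same index."""

    def __init__(self):
        self.additional_buffer = {}            # frame index -> buffered additional frame

    def store_additional(self, frame_idx, frame):
        self.additional_buffer[frame_idx] = frame

    def on_reference_frame(self, frame_idx, reference_frame):
        additional = self.additional_buffer.get(frame_idx)
        if additional is None:
            return reference_frame             # fall back to 2D output if the pair is missing
        return ("3D", reference_frame, additional)   # placeholder for stereoscopic composition

receiver = HybridReceiver()
receiver.store_additional(0, "additional frame 0")          # received in non-real time
print(receiver.on_reference_frame(0, "reference frame 0"))  # received in real time
```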
Abstract:
Disclosed herein is an immersive video processing method. The immersive video processing method may include classifying a plurality of view videos into a base view and an additional view, generating a residual video for a view video classified as the additional view, packing a patch, which is generated based on the residual video, into an atlas video, and generating metadata for the patch.
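The residual-generation and patch-packing steps might be sketched as below. The pruning threshold, the synthesized base-view prediction, and the patch-placement policy are simplifying assumptions for illustration, not the method's actual definitions.

```python
import numpy as np

def residual_view(additional_view: np.ndarray, synthesized_from_base: np.ndarray,
                  threshold: int = 8) -> np.ndarray:
    """Keep only pixels of the additional view that the base view cannot predict;
    everything else is zeroed, yielding a sparse residual video."""
    diff = np.abs(additional_view.astype(int) - synthesized_from_base.astype(int))
    return np.where(diff > threshold, additional_view, 0).astype(additional_view.dtype)

def pack_patch(atlas: np.ndarray, patch: np.ndarray, x: int, y: int) -> dict:
    """Copy a rectangular patch cut from the residual video into the atlas and
    return the metadata a decoder would need to place it back."""
    h, w = patch.shape
    atlas[y:y + h, x:x + w] = patch
    return {"pos_x": x, "pos_y": y, "width": w, "height": h}

atlas = np.zeros((8, 8), dtype=np.uint8)
add_view = np.full((8, 8), 120, dtype=np.uint8)
base_pred = np.full((8, 8), 100, dtype=np.uint8)    # poor prediction, so the residual is kept
res = residual_view(add_view, base_pred)
print(pack_patch(atlas, res[:4, :4], x=0, y=0))     # {'pos_x': 0, 'pos_y': 0, 'width': 4, 'height': 4}
```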
Abstract:
Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include first threshold information that serves as a criterion for distinguishing between valid pixels and invalid pixels in the atlas video.
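One plausible reading of the first threshold information is sketched below, where atlas pixels at or above the signalled threshold are treated as valid and the rest as padding. The threshold value and the comparison direction are assumptions for illustration only.

```python
import numpy as np

def split_valid_invalid(atlas_frame: np.ndarray, first_threshold: int) -> np.ndarray:
    """Return a boolean occupancy map: True for pixels treated as valid texture,
    False for pixels treated as invalid padding."""
    return atlas_frame >= first_threshold

frame = np.array([[0, 5, 200], [64, 3, 130]], dtype=np.uint8)
occupancy = split_valid_invalid(frame, first_threshold=64)   # threshold taken from the metadata
print(occupancy)   # [[False False  True], [ True False  True]]
```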
Abstract:
Provided are a method and an apparatus for generating a three-dimensional (3D) virtual viewpoint image, including: segmenting a first image into a plurality of images representing different layers based on depth information of the first image at a gaze point of a user; and inpainting an area occluded by the foreground in the plurality of images based on depth information of a reference viewpoint image.
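A minimal sketch of the depth-based layer segmentation described above. The boundary values and the hole convention (zeros awaiting inpainting) are assumptions, and the inpainting step itself is omitted.

```python
import numpy as np

def split_into_depth_layers(image: np.ndarray, depth: np.ndarray,
                            boundaries: list) -> list:
    """Cut one view into per-layer images using depth boundaries; pixels outside a
    layer's depth range are left as holes (zeros) to be inpainted later."""
    layers = []
    edges = [float(depth.min())] + list(boundaries) + [float(depth.max()) + 1e-6]
    for near, far in zip(edges[:-1], edges[1:]):
        mask = (depth >= near) & (depth < far)
        layers.append(np.where(mask, image, 0))
    return layers

rng = np.random.default_rng(0)
img = rng.integers(0, 255, (4, 4))
dep = rng.random((4, 4))
layers = split_into_depth_layers(img, dep, boundaries=[0.5])
print(len(layers))   # two layers: foreground (near) and background (far)
```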
Abstract:
A method for decoding a video including a plurality of views, according to one embodiment of the present invention, comprises the steps of: configuring a base merge motion candidate list by using motion information of neighboring blocks and a temporally corresponding block of a current block; configuring an extended merge motion information list by using motion information of a depth information map and of a view different from that of the current block; and determining whether neighboring-block motion information contained in the base merge motion candidate list is derived through view synthesis prediction.
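The two-stage candidate list construction might be pictured as below. The candidate ordering, the list size limit, and the names MergeCandidate and build_merge_list are illustrative assumptions, not the normative derivation process.

```python
from dataclasses import dataclass

@dataclass
class MergeCandidate:
    motion_vector: tuple
    from_view_synthesis: bool = False    # marks candidates derived through view synthesis prediction

def build_merge_list(spatial_neighbors, temporal_candidate,
                     inter_view_candidate, depth_derived_candidate,
                     max_candidates=6):
    """Base list from spatial/temporal neighbours, then extended candidates taken
    from another view and from the depth map, truncated to the allowed size."""
    base = [c for c in spatial_neighbors if c is not None]
    if temporal_candidate is not None:
        base.append(temporal_candidate)
    extended = [c for c in (inter_view_candidate, depth_derived_candidate) if c is not None]
    return (base + extended)[:max_candidates]

neighbor = MergeCandidate((1, -2))
vsp_candidate = MergeCandidate((0, 0), from_view_synthesis=True)
merge_list = build_merge_list([neighbor, None], None, vsp_candidate, None)
print([c.from_view_synthesis for c in merge_list])   # [False, True]
```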
Abstract:
The present invention relates to an optical information processing apparatus and to a method for controlling the same. The optical information processing apparatus comprises: a light source for emitting light; an optical modulator for modulating the light emitted from the light source; an optical system for collecting the light modulated by the optical modulator and making the modulated light incident on an optical medium; a stage on which the optical medium is placed; an optical detection unit for detecting the pattern of the light reflected from the optical medium; and a control unit for analyzing the pattern of the light detected by the optical detection unit so as to control the optical system and the position of the stage.
Abstract:
A method of processing an immersive video includes classifying view images into a basic image and an additional image, performing pruning on the view images by referring to a result of the classification, generating atlases based on a result of the pruning, generating a merged atlas by merging the atlases into one atlas, and generating configuration information of the merged atlas.
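One way to picture the merging step and its configuration information: the sketch below stacks atlases vertically and records, per input atlas, the offset and size needed to recover it. The vertical layout and the field names in the configuration records are assumptions for illustration.

```python
import numpy as np

def merge_atlases(atlases):
    """Stack individual atlases vertically into one merged atlas and record, for
    each input atlas, the offset and size a decoder needs to cut it back out."""
    width = max(a.shape[1] for a in atlases)
    merged_rows, config, y = [], [], 0
    for idx, atlas in enumerate(atlases):
        padded = np.zeros((atlas.shape[0], width), dtype=atlas.dtype)
        padded[:, :atlas.shape[1]] = atlas
        merged_rows.append(padded)
        config.append({"atlas_id": idx, "offset_y": y,
                       "height": atlas.shape[0], "width": atlas.shape[1]})
        y += atlas.shape[0]
    return np.vstack(merged_rows), config

merged, cfg = merge_atlases([np.ones((2, 3), np.uint8), np.full((3, 2), 2, np.uint8)])
print(merged.shape, cfg[1]["offset_y"])   # (5, 3) 2
```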
Abstract:
A method of producing an immersive video comprises decoding an atlas, parsing a flag for the atlas, and producing a viewport image using the atlas. The flag may indicate whether the viewport image can be completely produced from the atlas alone, and, according to the value of the flag, it may be determined whether an additional atlas is used, in addition to the atlas, when the viewport image is produced.
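The flag-driven decision could look like the following sketch. The function produce_viewport, the renderer callback, and the flag convention shown (a value of 0 meaning an additional atlas is required) are assumptions for illustration only.

```python
def produce_viewport(primary_atlas, fetch_additional_atlas, viewport_complete_flag,
                     render=lambda atlases: f"viewport rendered from {len(atlases)} atlas(es)"):
    """If the flag says the primary atlas fully covers the viewport, render from it
    alone; otherwise fetch an additional atlas before rendering."""
    atlases = [primary_atlas]
    if not viewport_complete_flag:        # flag parsed from the decoded atlas metadata
        atlases.append(fetch_additional_atlas())
    return render(atlases)

print(produce_viewport("atlas0", lambda: "atlas1", viewport_complete_flag=0))  # uses two atlases
print(produce_viewport("atlas0", lambda: "atlas1", viewport_complete_flag=1))  # uses one atlas
```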