Abstract:
Disclosed herein are an apparatus and method for obtaining spatial information using an active array lens. To obtain spatial information in the apparatus, which includes the active microlens, at least one active pattern that varies the focus of the microlens is determined by controlling the voltage applied to a pattern of the active microlens, and at least one projection image captured with the at least one active pattern is obtained on a time-division basis.
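The following is a minimal sketch, not the patented implementation, of the time-division capture loop this abstract outlines: each voltage pattern reconfigures the microlens focus, and one projection image is captured per pattern. The driver and sensor callables (set_lens_voltage_pattern, capture_projection_image) are hypothetical stand-ins.

```python
from typing import Callable, List, Sequence

def capture_time_division(
    voltage_patterns: Sequence[Sequence[float]],
    set_lens_voltage_pattern: Callable[[Sequence[float]], None],
    capture_projection_image: Callable[[], object],
) -> List[object]:
    """Apply each voltage pattern in turn and capture one projection image per time slot."""
    images = []
    for pattern in voltage_patterns:
        set_lens_voltage_pattern(pattern)           # vary the microlens focus for this slot
        images.append(capture_projection_image())   # one projection image per active pattern
    return images
```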
Abstract:
Provided are a method and system for transmitting and receiving a three-dimensional (3D) broadcasting service and, more particularly, a method and system for transmitting and receiving a reference image and a 3D auxiliary image for the 3D broadcasting service through a digital broadcasting network in real time or in non-real time. According to the present invention, a seamless broadcasting service may be provided to terminals receiving a two-dimensional (2D) or 3D broadcast.
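As a rough, receiver-side sketch only (the abstract does not specify the receiver logic), the following shows the kind of fallback that lets 2D-only terminals keep playing the reference image while 3D-capable terminals combine it with the 3D auxiliary image. All names are illustrative.

```python
from typing import Optional

def present_frame(reference_frame: object,
                  auxiliary_frame: Optional[object],
                  supports_3d: bool) -> tuple:
    """Render 3D when the auxiliary stream is available and supported; otherwise play 2D."""
    if supports_3d and auxiliary_frame is not None:
        return ("3D", reference_frame, auxiliary_frame)   # pair reference + auxiliary image
    return ("2D", reference_frame)                        # 2D terminals use the reference only
```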
Abstract:
An image encoding method according to the present disclosure may include generating an atlas based on at least one two-dimensional or three-dimensional image; and encoding the atlas and metadata for the atlas. In this case, the metadata may include information about a patch packed in the atlas, and the patch information may include information about a three-dimensional point projected on a two-dimensional patch.
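Below is an illustrative sketch, under assumed field names, of the per-patch metadata the abstract mentions: the packing position of the patch in the atlas together with information about a three-dimensional point projected on the two-dimensional patch. This is not the signalled bitstream syntax.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PatchInfo:
    atlas_x: int                                 # packing position of the patch in the atlas
    atlas_y: int
    width: int
    height: int
    projected_point: Tuple[float, float, float]  # 3D point projected on the 2D patch

@dataclass
class AtlasMetadata:
    atlas_id: int
    patches: List[PatchInfo] = field(default_factory=list)

metadata = AtlasMetadata(atlas_id=0)
metadata.patches.append(PatchInfo(atlas_x=0, atlas_y=0, width=64, height=64,
                                  projected_point=(1.2, -0.4, 3.0)))
```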
Abstract:
A video encoding method according to the present disclosure includes classifying a plurality of view images into basic images and additional images, performing pruning on at least one of the plurality of view images on the basis of the classification result, generating an atlas based on the pruning results, and encoding the atlas and metadata for the atlas.
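A minimal, self-contained sketch of the pruning step follows, under a strong simplifying assumption: the additional view has already been reprojected into the basic view's image plane, so redundancy can be tested per pixel. The threshold and the redundancy criterion are illustrative, not the method claimed.

```python
import numpy as np

def pruning_mask(basic_view: np.ndarray, additional_view: np.ndarray,
                 threshold: float = 10.0) -> np.ndarray:
    """True where the additional view is NOT redundant with the basic view and must be kept."""
    diff = np.abs(basic_view.astype(np.float32) - additional_view.astype(np.float32))
    return diff.max(axis=-1) > threshold

# Example: an 8x8 RGB pair in which only the top-left 4x4 block differs.
basic = np.zeros((8, 8, 3), dtype=np.uint8)
extra = basic.copy()
extra[:4, :4] = 255
print(pruning_mask(basic, extra).sum())   # 16 pixels survive pruning and go into the atlas
```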
Abstract:
An image encoding method according to the present disclosure may include classifying a plurality of view images into a basic image and an additional image; performing pruning for at least one of the plurality of view images based on a result of the classification; generating an atlas based on a result of performing the pruning; and encoding the atlas and metadata for the atlas. In this case, the metadata may include spherical harmonic function information on a point in a three-dimensional space.
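To make the role of the spherical harmonic metadata concrete, here is an illustrative decoder-side sketch (bands 0 and 1 only) in which stored SH coefficients for a point in three-dimensional space are evaluated along a viewing direction to obtain a view-dependent color. The coefficient layout and normalization are assumptions, not the signalled syntax.

```python
import numpy as np

def sh_basis_l1(direction: np.ndarray) -> np.ndarray:
    """Real spherical harmonic basis values for bands 0 and 1 at a unit direction (x, y, z)."""
    x, y, z = direction / np.linalg.norm(direction)
    return np.array([0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x])

def view_dependent_color(sh_coeffs: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """sh_coeffs has shape (4, 3): four SH coefficients for each RGB channel."""
    return sh_basis_l1(direction) @ sh_coeffs

coeffs = np.zeros((4, 3))
coeffs[0] = [0.5, 0.5, 0.5]      # band-0 (view-independent) term
coeffs[3] = [0.2, 0.0, 0.0]      # red brightens when the point is viewed from +x
print(view_dependent_color(coeffs, np.array([1.0, 0.0, 0.0])))
```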
Abstract:
A method of processing an immersive video according to the present disclosure includes performing pruning for an input image, generating an atlas based on patches generated by the pruning and generating a cropped atlas by removing a background region of the atlas.
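The cropping step can be illustrated with a small sketch, assuming the background region is marked with a known fill value (0 here); the atlas is cropped to the tight bounding box of the remaining patch pixels. The actual criterion for the background region is not specified by the abstract.

```python
import numpy as np

def crop_atlas(atlas: np.ndarray, background_value: int = 0) -> np.ndarray:
    """Remove atlas rows and columns that contain only background."""
    occupied = atlas != background_value
    rows = np.flatnonzero(occupied.any(axis=1))
    cols = np.flatnonzero(occupied.any(axis=0))
    if rows.size == 0:                         # atlas is entirely background
        return atlas[:0, :0]
    return atlas[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

atlas = np.zeros((16, 16), dtype=np.uint8)
atlas[2:6, 3:9] = 200                          # a single packed patch
print(crop_atlas(atlas).shape)                 # (4, 6)
```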
Abstract:
A video decoding method comprises receiving a plurality of atlases and metadata, unpacking patches included in the plurality of atlases based on the plurality of atlases and the metadata, reconstructing view images including an image of a basic view and images of a plurality of additional views, by unpruning the patches based on the metadata, and synthesizing an image of a target playback view based on the view images. The metadata is data related to priorities of the view images.
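One plausible reading of the priority metadata is sketched below: reconstructed view images are visited in priority order, and each target pixel is taken from the highest-priority view that covers it. Warping and unpruning details are omitted, and the coverage masks stand in for the geometric visibility test; none of this is the claimed syntax.

```python
import numpy as np

def synthesize_by_priority(views, priorities):
    """views: list of (image, coverage_mask) pairs; priorities: lower value = higher priority."""
    order = np.argsort(priorities)
    height, width = views[0][0].shape[:2]
    target = np.zeros((height, width), dtype=np.float32)
    filled = np.zeros((height, width), dtype=bool)
    for idx in order:
        image, mask = views[idx]
        take = mask & ~filled                  # only pixels no higher-priority view covered
        target[take] = image[take]
        filled |= take
    return target
```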
Abstract:
Disclosed herein are a gaze-tracking device and a method therefor. The device includes: an image acquisition unit configured to obtain an eyeball image; a pupil detection unit configured to detect a pupil center by using the eyeball image; a virtual corneal reflection light position generator configured to process the eyeball image so that a virtual corneal reflection light is located at a predetermined point in the eyeball image; and a PCVR vector generator configured to generate a pupil center virtual reflection vector (PCVR vector) based on a position of the pupil center and a position of the virtual corneal reflection light.
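The PCVR vector itself reduces to a simple image-plane difference, sketched below under the assumption that the virtual corneal reflection is fixed at a predetermined pixel position (for example, the image center); pupil detection is not shown.

```python
from typing import Tuple

def pcvr_vector(pupil_center: Tuple[float, float],
                virtual_reflection: Tuple[float, float]) -> Tuple[float, float]:
    """Vector from the virtual corneal reflection position to the detected pupil center."""
    return (pupil_center[0] - virtual_reflection[0],
            pupil_center[1] - virtual_reflection[1])

# Example: virtual reflection placed at the center of a 640x480 eyeball image.
print(pcvr_vector(pupil_center=(351.0, 248.5), virtual_reflection=(320.0, 240.0)))
```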
Abstract:
A method of producing an immersive video comprises decoding an atlas, parsing a flag for the atlas, and producing a viewport image using the atlas. The flag may indicate whether the viewport image can be completely produced from the atlas, and, according to the value of the flag, it may be determined whether an additional atlas is used, in addition to the atlas, when producing the viewport image.
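A small decoder-side sketch of the flag-driven branch is shown below; the function and parameter names are assumptions, not the signalled syntax.

```python
from typing import Callable, List

def atlases_for_viewport(atlas: object, viewport_complete_flag: bool,
                         fetch_additional_atlas: Callable[[], object]) -> List[object]:
    """Return the atlases used to produce the viewport image."""
    atlases = [atlas]
    if not viewport_complete_flag:        # viewport cannot be fully produced from this atlas
        atlases.append(fetch_additional_atlas())
    return atlases
```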
Abstract:
Disclosed herein are an image encoding/decoding method and apparatus for virtual view synthesis. The image decoding method for virtual view synthesis may include decoding texture information and depth information of at least one basic view image and at least one additional view image from a bitstream and synthesizing a virtual view on the basis of the texture information and the depth information, wherein the basic view image and the additional view image comprise a non-empty region and an empty region, and wherein the synthesizing of the virtual view comprises determining the non-empty region based on a specific value in the depth information and a threshold and synthesizing the virtual view by using the determined non-empty region.
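A minimal, self-contained sketch of the non-empty-region test follows: depth samples are compared against a threshold, with a reserved value marking empty pixels, and only the resulting non-empty region contributes to virtual view synthesis. The reserved value and threshold used here are illustrative assumptions.

```python
import numpy as np

EMPTY_DEPTH = 0        # assumed reserved depth value for empty (unoccupied) pixels

def non_empty_mask(depth: np.ndarray, threshold: int = 1) -> np.ndarray:
    """True where the depth sample marks a valid (non-empty) pixel usable for synthesis."""
    return depth >= max(threshold, EMPTY_DEPTH + 1)

depth = np.array([[0, 0, 37],
                  [0, 52, 41]], dtype=np.uint16)
print(non_empty_mask(depth))
# [[False False  True]
#  [False  True  True]]
```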