Abstract:
A method of converting a two-dimensional video to a three-dimensional video, the method comprising: comparing an image of an nth frame with an accumulated image up to an (n−1)th frame in the two-dimensional video to calculate a difference in a color value for each pixel; generating a difference image including information on a change in the color value for each pixel of the nth frame; storing an accumulated image up to the nth frame by accumulating the information on the change in the color value for each pixel up to the nth frame; performing an operation on pixels in which the change in the color value is equal to or larger than a predetermined level by using the difference image to generate a division image and a depth map image; and converting the image of the nth frame to a three-dimensional image by using the depth map image.
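As a rough illustration of the described pipeline, the sketch below accumulates frames, thresholds the per-pixel color change to form the difference/division images, derives a placeholder depth map, and shifts pixels to produce a stereo pair. The running-mean accumulation rule, the `threshold` and `max_shift` parameters, and the depth heuristic are assumptions for illustration only, not formulas or values taken from the abstract.

```python
import numpy as np

def update_accumulated(acc, frame, n):
    # Running per-pixel accumulation of color values up to frame n
    # (a running mean is used here as a hypothetical accumulation rule).
    return acc + (frame.astype(np.float32) - acc) / n

def frame_to_stereo(frame, acc, threshold=10.0, max_shift=8):
    # Difference image: per-pixel color change between the nth frame and the
    # accumulated image up to the (n-1)th frame.
    diff = np.abs(frame.astype(np.float32) - acc).mean(axis=2)

    # Division image: only pixels whose change reaches the predetermined level.
    moving = diff >= threshold

    # Placeholder depth heuristic: stronger change -> closer to the viewer.
    depth = np.zeros_like(diff)
    depth[moving] = np.clip(diff[moving] / (diff.max() + 1e-6), 0.0, 1.0)

    # Convert to a stereo pair by shifting pixels horizontally according to depth.
    h, w = depth.shape
    shifts = (depth * max_shift).astype(int)
    cols = np.clip(np.arange(w)[None, :] - shifts, 0, w - 1)
    rows = np.arange(h)[:, None]
    left, right = frame, frame[rows, cols]
    return left, right, depth
```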
Abstract:
Disclosed is an apparatus and method of segmenting an object. An object segmentation method according to the present disclosure includes: receiving an input image; receiving a user input indicating at least one piece of information on a foreground region and a background region included in the input image; generating at least one of a foreground pixel list and a background pixel list using the received user input; calculating a Gaussian distribution for at least one pixel making up the input image using at least one of the generated foreground pixel list and background pixel list; and determining whether the at least one pixel is a foreground pixel or a background pixel using the calculated Gaussian distribution.
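A minimal sketch of the kind of classification step the abstract describes is given below: a single multivariate Gaussian is fitted to the user-provided foreground and background pixel lists, and each pixel is labeled by comparing the two log-likelihoods. The single-Gaussian-per-class model, the regularization term, and all function names are assumptions; the disclosure may, for instance, use per-pixel or mixture distributions instead.

```python
import numpy as np

def fit_gaussian(pixels):
    # Fit one multivariate Gaussian (mean, covariance) to a list of RGB pixels.
    pixels = np.asarray(pixels, dtype=np.float64)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)  # regularize
    return mean, cov

def log_density(x, mean, cov):
    # Log of the Gaussian density at pixel x, up to an additive constant.
    d = x - mean
    return -0.5 * (d @ np.linalg.inv(cov) @ d) - 0.5 * np.log(np.linalg.det(cov))

def segment(image, fg_pixels, bg_pixels):
    # Label each pixel foreground (True) or background (False) by comparing
    # its likelihood under the foreground and background Gaussians.
    fg_mean, fg_cov = fit_gaussian(fg_pixels)
    bg_mean, bg_cov = fit_gaussian(bg_pixels)
    h, w, _ = image.shape
    mask = np.zeros(h * w, dtype=bool)
    for i, px in enumerate(image.reshape(-1, 3).astype(np.float64)):
        mask[i] = log_density(px, fg_mean, fg_cov) > log_density(px, bg_mean, bg_cov)
    return mask.reshape(h, w)
```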
Abstract:
Disclosed are an apparatus and a method for extracting a foreground layer from an image sequence, which extract a foreground object layer area whose depth value is discontinuous with that of the background from an input image sequence. With the present disclosure, the layer area is automatically tracked in subsequent frames from a user's setting in the start frame of an image sequence in which the depth values of the foreground and the background are discontinuous, thereby extracting a foreground layer area in which the drift and flickering phenomena are reduced.
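The abstract does not state how the layer area is tracked between frames; as a stand-in only, the sketch below propagates the user-defined start-frame mask using dense Farneback optical flow and backward warping. All parameter values are illustrative, not taken from the disclosure.

```python
import cv2
import numpy as np

def propagate_mask(prev_gray, cur_gray, prev_mask):
    # Dense optical flow from the current frame back to the previous frame
    # (inputs are 8-bit grayscale frames; parameter values are illustrative).
    flow = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_mask.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Backward-warp the previous layer mask into the current frame.
    warped = cv2.remap(prev_mask.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR)
    return warped > 0.5
```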
Abstract:
An arbitrary viewpoint image generation method includes obtaining an original image set photographed by a plurality of cameras included in a camera array at each of at least one focal distance at a same time point; obtaining a multi-focus image set by generating a multi-focus image from the original image set for each of the at least one focal distance; and generating an arbitrary viewpoint image at a position where a viewpoint is to be moved from the multi-focus image set.
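One common way to realize the "multi-focus image from the original image set for each focal distance" step is synthetic-aperture (shift-and-add) refocusing over the camera array; the sketch below assumes that interpretation. The `offsets`/`disparity` parameterization and the integer pixel shifts are simplifications, not details from the abstract.

```python
import numpy as np

def refocus(images, offsets, disparity):
    # Shift-and-add refocusing: each array camera's image is shifted in
    # proportion to its (dx, dy) offset from the array center and averaged;
    # 'disparity' (pixels per unit offset) plays the role of the focal distance.
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (dx, dy) in zip(images, offsets):
        sx, sy = int(round(dx * disparity)), int(round(dy * disparity))
        acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return (acc / len(images)).astype(images[0].dtype)

# A multi-focus image set is then one refocused image per candidate focal distance:
# multi_focus = [refocus(images, offsets, d) for d in candidate_disparities]
```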
Abstract:
A method of increasing a photographing speed of a photographing device that captures an image through a combination of two or more photographing devices and generates and provides an image by using the captured image. An RGB image obtaining device and a depth image obtaining device alternately perform photographing to obtain images. A second depth image and a second RGB image, respectively corresponding to a first RGB image and a first depth image obtained by the alternate photographing, are synthesized and output, thereby effectively doubling the photographing speed.
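The sketch below illustrates only the interleaving idea: RGB and depth are captured on alternating ticks, and the modality missing at each tick is filled in from the most recent capture so that every output tick carries both an RGB and a depth image. Plain copying stands in for whatever synthesis the source actually uses, and the `Capture` class and alternating schedule are assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class Capture:
    timestamp: float
    rgb: Optional[np.ndarray] = None    # present on RGB ticks
    depth: Optional[np.ndarray] = None  # present on depth ticks

def fill_missing(captures: List[Capture]) -> List[Capture]:
    # For each tick, the modality that was not captured is filled in from the
    # most recent capture of that modality, so every tick carries both images.
    last_rgb, last_depth = None, None
    out = []
    for c in captures:
        rgb = c.rgb if c.rgb is not None else last_rgb
        depth = c.depth if c.depth is not None else last_depth
        out.append(Capture(c.timestamp, rgb, depth))
        last_rgb, last_depth = rgb, depth
    return out
```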