Abstract:
A method of converting a two-dimensional video to a three-dimensional video, the method comprising: comparing an image of an nth frame of the two-dimensional video with an accumulated image up to an (n−1)th frame to calculate a difference in color value for each pixel; generating a difference image containing information on the change in color value for each pixel of the nth frame; storing an accumulated image up to the nth frame by accumulating the information on the change in color value for each pixel up to the nth frame; performing an operation, using the difference image, on pixels whose change in color value is equal to or larger than a predetermined level to generate a division image and a depth map image; and converting the image of the nth frame to a three-dimensional image by using the depth map image.
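A minimal sketch of the described pipeline, assuming single-channel (grayscale) frames held as 2D NumPy arrays, a simple running-average accumulator, and depth-image-based rendering by horizontal pixel shifting; the function names, threshold, averaging weight, and disparity scale are illustrative choices, not values taken from the abstract.

import numpy as np

def update_accumulated(accumulated, frame, weight=0.1):
    # Accumulate color information for each pixel up to the current frame
    # with a simple running average (assumed accumulation rule).
    return (1.0 - weight) * accumulated + weight * frame.astype(np.float32)

def difference_image(accumulated, frame):
    # Per-pixel change in color value between the nth frame and the
    # accumulated image up to the (n-1)th frame.
    return np.abs(frame.astype(np.float32) - accumulated)

def division_and_depth(diff, threshold=10.0):
    # Operate only on pixels whose change is equal to or larger than the
    # threshold; here, strongly changing pixels are simply assumed to lie
    # closer to the camera.
    division = diff >= threshold
    depth = np.zeros(diff.shape, dtype=np.float32)
    depth[division] = np.clip(diff[division] / (diff.max() + 1e-6), 0.0, 1.0)
    return division, depth

def render_stereo_pair(frame, depth, max_disparity=8):
    # Convert the nth frame to a 3D (stereo) image by shifting pixels
    # horizontally in proportion to the depth map.
    h, w = frame.shape[:2]
    shifts = (depth * max_disparity).astype(np.int32)
    cols = np.arange(w)
    left = np.empty_like(frame)
    right = np.empty_like(frame)
    for y in range(h):
        left[y] = frame[y, np.clip(cols + shifts[y], 0, w - 1)]
        right[y] = frame[y, np.clip(cols - shifts[y], 0, w - 1)]
    return left, right

In a practical converter the depth map would typically be smoothed and the occlusion holes left by the shifts would be filled; those steps are omitted from this outline.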
Abstract:
Disclosed herein are a method and apparatus for matching 3D terrain information based on aerial images captured at different altitudes. The method includes receiving a high-altitude numerical height model based on a terrain image captured at a specific high altitude; receiving 3D terrain information observed from a low altitude, which is generated based on a terrain image captured at an altitude lower than the specific high altitude; generating a low-altitude numerical height model by converting the 3D terrain information into a numerical model in the same form as the high-altitude numerical height model; measuring the cross-correlation between the high-altitude numerical height model and the low-altitude numerical height model, thereby calculating matching parameters for enabling the low-altitude numerical height model to match the high-altitude numerical height model; and adjusting the geospatial coordinates of the 3D terrain information based on the matching parameters and outputting georeferenced 3D terrain information in the same coordinate system as the high-altitude numerical height model.
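A minimal sketch of the cross-correlation matching step, assuming both numerical height models have already been resampled to the same grid spacing and size and are stored as 2D NumPy arrays; only a planar row/column shift is searched here, whereas the matching parameters in the abstract may include further degrees of freedom, and all function names are illustrative.

import numpy as np

def normalized_cross_correlation(a, b):
    # Zero-mean normalized cross-correlation of two equally sized patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def overlap(a, b, dy, dx):
    # Overlapping crops of a and of b shifted by (dy, dx) grid cells.
    h, w = a.shape
    ya, yb = max(0, dy), max(0, -dy)
    xa, xb = max(0, dx), max(0, -dx)
    hh, ww = h - abs(dy), w - abs(dx)
    return a[ya:ya + hh, xa:xa + ww], b[yb:yb + hh, xb:xb + ww]

def matching_parameters(high_dem, low_dem, cell_size, search_radius=20):
    # Measure the cross-correlation between the two height models over a
    # range of planar shifts and keep the shift that maximizes it.
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            a, b = overlap(high_dem, low_dem, dy, dx)
            score = normalized_cross_correlation(a, b)
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    dy, dx = best_shift
    # Translate the grid shift into metric offsets; the sign of the north
    # offset depends on the raster's row orientation.
    return {"east_offset": dx * cell_size,
            "north_offset": -dy * cell_size,
            "correlation": best_score}

The resulting offsets would then be added to the geospatial coordinates of the low-altitude 3D terrain information to produce output georeferenced in the same coordinate system as the high-altitude model.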
Abstract:
Disclosed are an apparatus and a method for extracting, from an input image sequence, a foreground object layer area whose depth value is discontinuous with that of the background. With the present disclosure, a layer area set by the user in the start frame of an image sequence in which the depth values of the foreground and the background are discontinuous is automatically tracked through the subsequent frames, thereby extracting a foreground layer area with reduced drift and flickering.
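A minimal sketch of tracking a user-initialized layer through a sequence, assuming a per-frame depth map is available as a 2D NumPy array; the depth-discontinuity threshold, temporal blending weight, and background-update rate are illustrative, and the disclosed method is more elaborate than this outline.

import numpy as np

def init_background_depth(depth0, user_mask):
    # The user marks the foreground layer in the start frame; the remaining
    # pixels of that frame initialize a per-pixel background depth model.
    background = depth0.astype(np.float64)
    background[user_mask] = np.nan
    return np.where(np.isnan(background), np.nanmedian(background), background)

def track_foreground_layer(depths, user_mask, depth_gap=0.5, blend=0.7):
    # Propagate the layer through subsequent frames: a pixel belongs to the
    # foreground layer when its depth is discontinuous with the background
    # model, and the mask is blended with the previous frame's mask to
    # reduce flickering and drift.
    background = init_background_depth(depths[0], user_mask)
    prev = user_mask.astype(np.float64)
    masks = [user_mask.astype(bool)]
    for depth in depths[1:]:
        evidence = (np.abs(depth - background) > depth_gap).astype(np.float64)
        mask = (blend * evidence + (1.0 - blend) * prev) > 0.5
        masks.append(mask)
        prev = mask.astype(np.float64)
        # Slowly refresh the background depth outside the current layer.
        background[~mask] = 0.9 * background[~mask] + 0.1 * depth[~mask]
    return masks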
Abstract:
Disclosed herein are an apparatus and method for reconstructing a 3D model. The apparatus for reconstructing a 3D model includes an image acquisition unit for acquiring multi-view images by receiving image signals captured by multiple drones using cameras, a geometric calibration unit for estimating motion variables of the drones based on the acquired multi-view images, and a 3D model creation unit for reconstructing a 3D model of a dynamic object from the matched multi-view images using a multi-view stereo method.
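A structural sketch of the three units named in the abstract, assuming synchronized image streams from several drones; the class and method names are illustrative, and a working system would delegate pose estimation and multi-view stereo to a structure-from-motion/MVS pipeline rather than the placeholders shown here.

from dataclasses import dataclass
from typing import Dict, List
import numpy as np

@dataclass
class MultiViewFrame:
    timestamp: float
    images: Dict[str, np.ndarray]   # drone id -> image captured at that time

@dataclass
class CameraPose:
    rotation: np.ndarray            # 3x3 rotation matrix
    translation: np.ndarray         # 3-vector camera position

class ImageAcquisitionUnit:
    # Gathers the image signals captured by the drones' cameras into
    # multi-view frames.
    def collect(self, streams: Dict[str, List[np.ndarray]], t: int) -> MultiViewFrame:
        return MultiViewFrame(timestamp=float(t),
                              images={d: frames[t] for d, frames in streams.items()})

class GeometricCalibrationUnit:
    # Estimates the drones' motion variables (camera poses) from the
    # acquired multi-view images; a real system would use feature matching
    # and bundle adjustment here.
    def estimate_poses(self, frame: MultiViewFrame) -> Dict[str, CameraPose]:
        return {d: CameraPose(np.eye(3), np.zeros(3)) for d in frame.images}

class ModelCreationUnit:
    # Reconstructs a 3D model of the dynamic object from the calibrated
    # views; the placeholder returns an empty (N, 3) point cloud where a
    # multi-view stereo method would run.
    def reconstruct(self, frame: MultiViewFrame,
                    poses: Dict[str, CameraPose]) -> np.ndarray:
        return np.empty((0, 3), dtype=np.float32)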