Abstract:
A method of acquiring the geometry of a specular object is provided. When based on a single-view depth image, the method may include receiving an input of the depth image, estimating a missing depth value based on connectivity with neighboring values in a local area of the depth image, and correcting the missing depth value. When based on a composite image, the method may include receiving an input of the composite image, calibrating the composite image, detecting an error area in the calibrated composite image, and correcting a missing depth value of the error area.
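A minimal sketch of the single-view branch described above, in Python, assuming the depth image is a NumPy array in which missing values are marked as NaN; the window size, the connectivity threshold, and the median-based fill are illustrative assumptions rather than the disclosed procedure.

import numpy as np

def fill_missing_depth(depth, window=5, connect_thresh=0.05):
    """Estimate missing depth values from connected neighbors in a local area."""
    filled = depth.copy()
    half = window // 2
    rows, cols = depth.shape
    for r, c in zip(*np.where(np.isnan(depth))):
        r0, r1 = max(0, r - half), min(rows, r + half + 1)
        c0, c1 = max(0, c - half), min(cols, c + half + 1)
        patch = depth[r0:r1, c0:c1]
        valid = patch[~np.isnan(patch)]
        if valid.size == 0:
            continue  # no neighboring evidence; leave the hole untouched
        # Treat neighbors close to the local median as "connected" to the hole.
        median = np.median(valid)
        connected = valid[np.abs(valid - median) < connect_thresh]
        if connected.size:
            filled[r, c] = connected.mean()
    return filled

# Toy usage: an 8x8 plane with one missing pixel is filled from its neighbors.
depth = np.full((8, 8), 1.0)
depth[4, 4] = np.nan
print(fill_missing_depth(depth)[4, 4])  # -> 1.0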
Abstract:
An apparatus and method for encoding a 3D mesh, and an apparatus and method for decoding the 3D mesh, are disclosed. The 3D mesh encoding apparatus may determine mesh information including position information of each of the vertices constituting the 3D mesh and connectivity information among the vertices, based on a level, and may progressively encode the determined mesh information based on the level, thereby reducing the error relative to the original 3D object at an equal transmission rate.
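A minimal sketch of level-based progressive encoding, assuming the mesh is given as a list of vertex positions and a list of vertex-index edges; assigning vertices to levels in list order and packing each level with struct are illustrative assumptions, since the abstract does not specify the level construction or the entropy coder.

import struct

def encode_progressive(vertices, edges, num_levels=3):
    """Encode vertex positions and connectivity level by level, coarse to fine."""
    chunks = []
    per_level = max(1, len(vertices) // num_levels)
    for level in range(num_levels):
        start = level * per_level
        stop = len(vertices) if level == num_levels - 1 else min((level + 1) * per_level, len(vertices))
        # Position information for the vertices introduced at this level.
        payload = b"".join(struct.pack("<3f", *vertices[i]) for i in range(start, stop))
        # Connectivity that becomes decodable once this level's vertices arrive.
        level_edges = [(a, b) for a, b in edges if start <= max(a, b) < stop]
        payload += b"".join(struct.pack("<2I", a, b) for a, b in level_edges)
        chunks.append(struct.pack("<2I", level, len(payload)) + payload)
    return chunks  # send chunk 0 first; every further chunk refines the mesh

A decoder would consume the chunks in order, so a receiver can stop after any level and still reconstruct a coarse approximation of the original 3D object.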
Abstract:
An endoscope using depth information and a method for detecting a polyp based on the endoscope using the depth information are provided. The endoscope may generate an irradiated light signal including visible light, obtain depth information based on the irradiated light signal and a reflected light signal obtained when the irradiated light signal is reflected off an intestine wall, generate a depth image of the inside of the intestine wall based on the depth information, and detect a polyp located on the intestine wall based on the depth image.
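A minimal sketch of the depth pipeline described above, assuming a time-of-flight style measurement in which depth is derived from the phase shift between the irradiated and reflected signals, and a polyp is flagged where the wall protrudes from its locally smoothed background; the phase-to-depth formula is the standard ToF relation, while the window size and protrusion threshold are illustrative assumptions.

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_shift, mod_freq=20e6):
    """Convert the phase shift (radians) of the reflected signal into depth (m)."""
    return (C * phase_shift) / (4 * np.pi * mod_freq)

def detect_polyps(depth_image, window=7, protrusion=0.002):
    """Mark pixels that sit closer to the camera than the surrounding wall."""
    pad = window // 2
    padded = np.pad(depth_image, pad, mode="edge")
    background = np.zeros_like(depth_image)
    for r in range(depth_image.shape[0]):
        for c in range(depth_image.shape[1]):
            background[r, c] = padded[r:r + window, c:c + window].mean()
    # A polyp protrudes from the wall, so its depth is smaller than the local mean.
    return (background - depth_image) > protrusion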
Abstract:
An apparatus and method for reconstructing a super-resolution three-dimensional (3D) image from a depth image are provided. The apparatus may include an error point relocation processing unit to relocate an error point in the depth image, and a super-resolution processing unit to reconstruct a 3D image by performing super-resolution on the depth image in which the error point has been relocated.
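A minimal sketch of the two stages named above, in which "error points" are interpreted as flying pixels near depth discontinuities that are snapped to the nearer of the two adjacent surfaces before upsampling; both that interpretation and the bicubic upsampling via SciPy are illustrative assumptions, not the disclosed algorithm.

import numpy as np
from scipy.ndimage import zoom

def relocate_error_points(depth, edge_thresh=0.1):
    """Snap pixels lying between two surfaces onto the nearer surface."""
    relocated = depth.copy()
    for r in range(depth.shape[0]):
        for c in range(1, depth.shape[1] - 1):
            left, right = depth[r, c - 1], depth[r, c + 1]
            if abs(left - right) > edge_thresh:  # horizontal depth discontinuity
                d = depth[r, c]
                if min(left, right) < d < max(left, right):
                    relocated[r, c] = left if abs(d - left) < abs(d - right) else right
    return relocated

def super_resolve(depth, factor=2):
    """Upsample the relocated depth image to reconstruct a finer 3D image."""
    return zoom(relocate_error_points(depth), factor, order=3)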
Abstract:
An apparatus and method for out-focusing a color image based on a depth image are provided. The method includes receiving an input of a depth region of interest (ROI) desired to be in focus when out-focusing is performed in the depth image, and applying different blur models to pixels corresponding to the depth ROI and to pixels of the color image corresponding to the region other than the depth ROI, thereby performing out-focusing based on the depth ROI.
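A minimal sketch of the out-focusing step, assuming a Gaussian blur model whose contribution grows with a pixel's depth distance from the ROI; the single full-strength blur followed by distance-weighted blending is an illustrative stand-in for the "different blur models" of the abstract.

import numpy as np
from scipy.ndimage import gaussian_filter

def out_focus(color, depth, roi_min, roi_max, max_sigma=5.0):
    """Keep the depth ROI sharp and progressively blur the rest of the color image."""
    in_roi = (depth >= roi_min) & (depth <= roi_max)
    # Depth distance of every pixel from the ROI (zero inside the ROI).
    dist = np.where(depth < roi_min, roi_min - depth, depth - roi_max)
    dist = np.where(in_roi, 0.0, dist)
    # Blur the whole image once at full strength, then blend it in with a
    # weight that grows with the pixel's depth distance from the ROI.
    blurred = np.stack([gaussian_filter(color[..., ch], sigma=max_sigma)
                        for ch in range(color.shape[-1])], axis=-1)
    if dist.max() > 0:
        weight = (dist / dist.max())[..., None]
    else:
        weight = np.zeros_like(dist)[..., None]
    return (1 - weight) * color + weight * blurred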
Abstract:
A method and system for generating an augmented reality (AR) scene may include obtaining real world information including multimedia information and sensor information associated with a real world, loading, onto an AR container, the real world information and an AR locator representing a scheme for mixing the real world information and at least one virtual object content, obtaining the at least one virtual object content corresponding to the real world information from a local storage or an AR contents server using the AR locator, and visualizing AR information by mixing the real world information and the at least one virtual object content based on the AR locator.
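A minimal sketch of the AR container and AR locator interaction, in which the locator identifies the content and its placement, and content is fetched from local storage first and from the contents server otherwise; the class names, fields, and the server's fetch method are hypothetical, since the abstract does not define these APIs.

from dataclasses import dataclass, field

@dataclass
class ARLocator:
    content_id: str   # identifies which virtual object content to obtain
    placement: tuple  # scheme for mixing: where/how to anchor the content

@dataclass
class ARContainer:
    real_world_info: dict                      # multimedia + sensor information
    local_storage: dict = field(default_factory=dict)

    def obtain_content(self, locator, contents_server):
        """Fetch virtual object content from local storage first, else from the server."""
        if locator.content_id in self.local_storage:
            return self.local_storage[locator.content_id]
        content = contents_server.fetch(locator.content_id)  # hypothetical server API
        self.local_storage[locator.content_id] = content     # cache for later reuse
        return content

    def visualize(self, locator, content):
        """Mix the real world information and the virtual object based on the locator."""
        return {"scene": self.real_world_info, "overlay": content,
                "anchor": locator.placement}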
Abstract:
An apparatus and method of correcting an image are provided. The apparatus includes a receiver to receive a depth value and a luminous intensity measured by at least one depth sensor, and a correction unit to read, from a first storage unit, a correction depth value from among a plurality of correction depth values mapped to different depth values and different luminous intensities, and to correct the measured depth value using the read correction depth value, the read correction depth value being the one mapped to the measured depth value and the measured luminous intensity.
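A minimal sketch of the lookup-based correction, assuming the first storage unit is a 2D table indexed by quantized depth and luminous intensity and that the stored value is an additive correction; the bin sizes and the additive form are illustrative assumptions.

import numpy as np

class DepthCorrector:
    def __init__(self, correction_table, depth_step=0.01, intensity_step=10.0):
        # correction_table[d_bin, i_bin] holds the correction depth value mapped
        # to that (depth, luminous intensity) pair, e.g. from prior calibration.
        self.table = correction_table
        self.depth_step = depth_step
        self.intensity_step = intensity_step

    def correct(self, measured_depth, measured_intensity):
        """Read the correction mapped to the measurement and apply it."""
        d_bin = min(int(measured_depth / self.depth_step), self.table.shape[0] - 1)
        i_bin = min(int(measured_intensity / self.intensity_step), self.table.shape[1] - 1)
        return measured_depth + self.table[d_bin, i_bin]

# Example: a calibration table where low luminous intensity biases depth by +2 cm.
table = np.zeros((100, 20))
table[:, :2] = -0.02
print(DepthCorrector(table).correct(0.55, 12.0))  # -> 0.53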