Abstract:
An apparatus and method for providing an augmented reality-based realistic experience. The apparatus includes a hardware unit and a software processing unit. The hardware unit includes a mirror configured to have both a reflective characteristic and a transmissive characteristic, a display panel configured to present an image of an augmented reality entity, and a sensor configured to acquire information about a user. The software processing unit performs color compensation on the color of the augmented reality entity and then presents the entity via the display panel based on the information about the user received from the hardware unit.
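A minimal sketch of one way such color compensation could work, assuming the half-mirror is characterized by a transmittance and a reflectance and that the perceived color is a linear mix of the panel light and the reflected background; the function name, parameter values, and mixing model below are illustrative assumptions, not the disclosed method.

import numpy as np

def compensate_color(entity_rgb, background_rgb, transmittance=0.6, reflectance=0.4):
    """Estimate the panel color whose transmitted light, mixed with the
    background reflected by the half-mirror, approximates the intended
    entity color.  Linear mixing model; colors are float RGB in [0, 1]."""
    entity = np.asarray(entity_rgb, dtype=float)
    background = np.asarray(background_rgb, dtype=float)
    # observed ~= transmittance * panel + reflectance * background  (assumed model)
    panel = (entity - reflectance * background) / transmittance
    return np.clip(panel, 0.0, 1.0)

# Example: a mid-gray entity rendered against a bright reflected background
print(compensate_color([0.5, 0.5, 0.5], [0.8, 0.7, 0.6]))
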
Abstract:
An image processing apparatus and method for calibrating the depth of a depth sensor. The image processing method may include obtaining a depth image of a target object captured by a depth sensor and a color image of the target object captured by a color camera; and calibrating the depth of the depth sensor by calibrating a geometrical relation between a projector and a depth camera, which are included in the depth sensor, based on the obtained depth and color images, and calculating a correct feature point on the image plane of the depth camera that corresponds to a feature point on the image plane of the projector.
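As a rough illustration of the geometry involved, the sketch below back-projects a projector feature point at a given depth and reprojects it onto the depth-camera image plane; the pinhole models, the projector-to-camera transform (R, t), and all numeric values are assumptions made for the example only.

import numpy as np

def project_feature_to_depth_camera(uv_proj, depth, K_proj, K_cam, R, t):
    """Back-project a projector feature point (u, v) at an estimated depth
    and reproject it onto the depth-camera image plane.  Pinhole models
    and the projector-to-camera transform (R, t) are assumed."""
    uv1 = np.array([uv_proj[0], uv_proj[1], 1.0])
    ray = np.linalg.inv(K_proj) @ uv1          # normalized ray in projector frame
    X_proj = ray * depth / ray[2]              # 3D point at the given depth
    X_cam = R @ X_proj + t                     # into the depth-camera frame
    uvw = K_cam @ X_cam
    return uvw[:2] / uvw[2]                    # corresponding pixel

# Toy example: identical intrinsics, 5 cm horizontal baseline
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
print(project_feature_to_depth_camera((400, 250), 1.2, K, K,
                                      np.eye(3), np.array([0.05, 0.0, 0.0])))
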
Abstract:
Disclosed herein are a device and method for supporting 3D object printing and an apparatus for providing a 3D object printing service. The proposed device for supporting 3D object printing includes an information collection unit for collecting preference information of a user and performance information of a 3D printer. A download unit downloads a 3D model that is an object to be printed, together with model information defined in the 3D model, in response to a printable selection signal. A model information creation unit creates new model information based on the 3D model and the model information defined in the 3D model. A print control command generation unit generates a print control command based on the preference information of the user and the performance information of the 3D printer, output from the information collection unit, and the new model information, output from the model information creation unit.
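The sketch below shows, under assumed field names and mapping rules, how preference information, printer performance information, and model information might be combined into a simple print-control command; it is not the disclosed command format.

from dataclasses import dataclass

@dataclass
class UserPreference:
    layer_quality: str            # e.g. "draft" or "fine"
    material: str

@dataclass
class PrinterPerformance:
    min_layer_height_mm: float
    max_speed_mm_s: float

def generate_print_command(pref, perf, model_info):
    """Combine user preferences, printer capabilities, and model metadata
    into a simple print-control command (illustrative mapping rules)."""
    layer_height = (perf.min_layer_height_mm if pref.layer_quality == "fine"
                    else perf.min_layer_height_mm * 2)
    speed = min(perf.max_speed_mm_s,
                model_info.get("recommended_speed_mm_s", perf.max_speed_mm_s))
    return {"model": model_info["name"],
            "material": pref.material,
            "layer_height_mm": layer_height,
            "speed_mm_s": speed}

print(generate_print_command(UserPreference("fine", "PLA"),
                             PrinterPerformance(0.1, 60.0),
                             {"name": "bracket", "recommended_speed_mm_s": 45.0}))
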
Abstract:
Method and apparatus for calibrating multiple cameras. A first mirror and a second mirror are arranged opposite each other. An object is disposed between the first mirror and the second mirror. Calibration of the multiple cameras is performed using a figure of the object formed on the second mirror via reflection from the first mirror and the second mirror.
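The basic geometric operation behind relating an object to its mirrored figure is reflection across a mirror plane; the sketch below shows that single step under an assumed point-and-normal plane representation, leaving out the full two-mirror calibration chain.

import numpy as np

def reflect_point(p, mirror_point, mirror_normal):
    """Reflect a 3D point across a mirror plane given by a point on the
    plane and its normal; a calibration of this kind would chain such
    reflections between the two facing mirrors."""
    n = np.asarray(mirror_normal, dtype=float)
    n = n / np.linalg.norm(n)
    p = np.asarray(p, dtype=float)
    d = np.dot(p - np.asarray(mirror_point, dtype=float), n)
    return p - 2.0 * d * n

# A point 0.3 m in front of a mirror located at x = 1.0 and facing -x
print(reflect_point([0.7, 0.2, 0.0], [1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]))
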
Abstract:
A 3D scanning apparatus and method using lighting based on a smart phone. The 3D scanning apparatus includes: an image capturing unit for capturing the image of a 3D object using a camera and a lighting apparatus installed in a terminal; an image processing unit for generating a color-enhanced image corresponding to the light emitted by the lighting apparatus; and a scanning unit for scanning the 3D object in 3D by extracting a scan area from the color-enhanced image based on the light and by extracting position information corresponding to the scan area.
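One plausible way to obtain a color-enhanced image and a scan area, assuming a flash-on frame and a flash-off frame are available, is to difference the two frames and threshold the result; the differencing scheme and the threshold value below are assumptions, not the disclosed processing.

import numpy as np

def extract_scan_area(frame_flash_on, frame_flash_off, threshold=30):
    """Subtract a flash-off frame from a flash-on frame so that regions lit
    mainly by the phone's lighting stand out, then threshold the result.
    Frames are uint8 arrays of shape (H, W, 3)."""
    on = frame_flash_on.astype(np.int16)
    off = frame_flash_off.astype(np.int16)
    enhanced = np.clip(on - off, 0, 255).astype(np.uint8)   # color-enhanced image
    mask = enhanced.max(axis=2) > threshold                 # candidate scan area
    return enhanced, mask

# Toy 2x2 frames: only the first pixel brightens noticeably under the flash
on = np.array([[[200, 180, 160], [60, 60, 60]],
               [[55, 50, 45], [70, 65, 60]]], dtype=np.uint8)
off = np.array([[[80, 70, 60], [55, 58, 59]],
                [[50, 48, 44], [66, 63, 59]]], dtype=np.uint8)
enhanced, mask = extract_scan_area(on, off)
print(mask)
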
Abstract:
An apparatus includes: an input/output interface configured to have a reference surface model and a floating surface model inputted thereto; a memory having instructions for registration of the reference surface model and the floating surface model stored therein; and a processor configured for registration of the reference surface model and the floating surface model according to the instructions. The instructions perform: selecting initial transformation parameters corresponding to the floating surface model by comparing depth images of the reference surface model and the floating surface model; transforming the floating surface model according to the initial transformation parameters; calculating compensation transformation parameters through a matrix calculated by applying singular value decomposition to a cross covariance matrix between the reference surface model and the floating surface model; and transforming the floating surface model according to the compensation transformation parameters, and executing registration of the reference surface model and the floating surface model.
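The compensation-transform step corresponds closely to the classical SVD-based rigid alignment of point sets via their cross-covariance matrix; the sketch below assumes corresponding point pairs are already available, which is a simplification of the full surface-model registration.

import numpy as np

def compensation_transform(reference_pts, floating_pts):
    """Rigid transform aligning floating points to reference points via the
    SVD of their cross-covariance matrix.  Points are (N, 3) arrays with
    assumed one-to-one correspondence."""
    ref_c = reference_pts.mean(axis=0)
    flt_c = floating_pts.mean(axis=0)
    H = (floating_pts - flt_c).T @ (reference_pts - ref_c)   # cross covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                   # avoid a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ref_c - R @ flt_c
    return R, t

# Toy check: recover a known rotation about the z-axis plus a translation
np.random.seed(0)
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta), np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
flt = np.random.rand(10, 3)
ref = flt @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = compensation_transform(ref, flt)
print(np.allclose(R, R_true), np.round(t, 3))
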
Abstract:
Disclosed herein are a learning-based three-dimensional (3D) model creation apparatus and method. A method for operating a learning-based 3D model creation apparatus includes generating multi-view feature images using supervised learning, creating a three-dimensional (3D) mesh model using a point cloud corresponding to the multi-view feature images and a feature image representing internal shape information, generating a texture map by projecting the 3D mesh model into three viewpoint images that are input, and creating a 3D model using the texture map.
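Texture-map generation by projection reduces, per viewpoint, to projecting the mesh vertices into that viewpoint image to obtain texture coordinates; the sketch below assumes a pinhole camera with known (K, R, t) and normalized UVs, both of which are illustrative choices rather than the disclosed procedure.

import numpy as np

def project_vertices(vertices, K, R, t, image_size):
    """Project 3D mesh vertices into one viewpoint image and return
    normalized texture coordinates for those vertices."""
    cam = R @ vertices.T + t[:, None]           # vertices in camera coordinates
    uvw = K @ cam
    uv = (uvw[:2] / uvw[2]).T                   # pixel coordinates
    w, h = image_size
    return uv / np.array([w, h], dtype=float)   # normalized [0, 1] UVs

# Two vertices projected with an assumed pinhole camera
K = np.array([[500.0, 0, 256], [0, 500.0, 256], [0, 0, 1]])
verts = np.array([[0.0, 0.0, 2.0], [0.1, -0.1, 2.2]])
print(project_vertices(verts, K, np.eye(3), np.zeros(3), (512, 512)))
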
Abstract:
Disclosed herein are an apparatus and method for verifying and correcting the degree of movement of 3D-printed joints. A method for correcting a print position of a three-dimensional (3D) object is performed by a 3D object print position correction apparatus, and includes setting at least one adjacent mesh of a 3D object in which multiple shells are connected to each other through a joint structure, calculating movement degree information for the joint structure of the 3D object using the set adjacent mesh, and correcting a print position of the 3D object such that the print position matches the calculated movement degree information.
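As a toy illustration of checking and correcting a joint's printability, the sketch below measures the clearance between sample points of two adjacent shells and shifts one shell if the clearance falls below an assumed printable minimum; both the brute-force distance query and the correction rule are simplifications, not the disclosed calculation of movement-degree information.

import numpy as np

def min_clearance(shell_a_pts, shell_b_pts):
    """Smallest distance between sample points of two adjacent shells of a
    printed joint (brute-force stand-in for a proper mesh distance query)."""
    diff = shell_a_pts[:, None, :] - shell_b_pts[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min()

def correct_print_position(shell_b_pts, clearance, min_printable=0.4, axis=0):
    """If the joint clearance is below what the printer can resolve, shift
    the movable shell along one axis so the parts do not fuse."""
    if clearance >= min_printable:
        return shell_b_pts
    shift = np.zeros(3)
    shift[axis] = min_printable - clearance
    return shell_b_pts + shift

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.2, 0.0, 0.0], [1.2, 0.0, 0.0]])
c = min_clearance(a, b)
print(c, correct_print_position(b, c))
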
Abstract:
An apparatus and method for measuring the position of a stereo camera. The apparatus for measuring the position of the stereo camera according to an embodiment includes: a feature point extraction unit for extracting feature points from images captured by a first camera and a second camera and generating a first feature point list based on the feature points; a feature point recognition unit for extracting feature points from images captured by the cameras after the cameras have moved, generating a second feature point list based on the feature points, and recognizing actual feature points based on the first feature point list and the second feature point list; and a position variation measurement unit for measuring variation in the positions of the cameras based on variation in the relative positions of the actual feature points.
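A small sketch of the feature-list idea: points recognized in both lists are treated as the actual feature points, and the camera motion is estimated from their displacement. Restricting the estimate to pure translation, and keying points by a feature id, are illustrative simplifications of the full position-variation measurement.

import numpy as np

def camera_translation(list_before, list_after):
    """Estimate camera translation from two feature-point lists (3D points
    in the camera frame keyed by feature id).  Only points recognized in
    both lists are used."""
    common = sorted(set(list_before) & set(list_after))
    before = np.array([list_before[k] for k in common])
    after = np.array([list_after[k] for k in common])
    # Static scene points appear to move opposite to the camera's motion
    return -(after - before).mean(axis=0)

before = {"a": [0.0, 0.0, 2.0], "b": [0.5, 0.1, 2.5], "c": [1.0, 0.2, 3.0]}
after = {"a": [-0.1, 0.0, 1.8], "b": [0.4, 0.1, 2.3], "d": [0.9, 0.2, 2.8]}
print(camera_translation(before, after))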