Abstract:
Disclosed herein is a method for automatically recognizing correspondence between calibration pattern feature points in camera calibration. The method includes generating calibration pattern candidates from an input image using a point-line-face-multi-face hierarchical structure, and verifying the calibration pattern candidates against a preset verification condition.
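The bottom layer of such a point-line-face hierarchy can be pictured as grouping detected feature points into maximal collinear sets, which are then assembled into faces and verified. A minimal illustrative sketch of that first step (the function name and tolerance are hypothetical, not taken from the patent):

```python
import numpy as np
from itertools import combinations

def collinear_groups(points, min_points=3, tol=1e-6):
    """Group detected feature points into maximal collinear sets: the
    'point -> line' step of hierarchical candidate generation."""
    points = np.asarray(points, dtype=float)
    groups = set()
    for i, j in combinations(range(len(points)), 2):
        p, q = points[i], points[j]
        d = q - p
        n = np.array([-d[1], d[0]])           # line normal
        n /= np.linalg.norm(n)
        dist = np.abs((points - p) @ n)       # point-to-line distances
        members = tuple(np.flatnonzero(dist < tol))
        if len(members) >= min_points:
            groups.add(members)
    # keep only maximal groups (drop subsets of larger collinear sets)
    return [g for g in groups
            if not any(set(g) < set(h) for h in groups if h != g)]
```

Candidate lines produced this way could then be screened by the preset verification conditions (e.g. the expected number of points per pattern row).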
Abstract:
Disclosed are an apparatus and a method for generating three-dimensional output data, in which the appearance or face of a user is easily reconstructed in three dimensions using one or more cameras including a depth sensor, a personalized three-dimensional avatar is produced through three-dimensional model transition, and three-dimensionally printable data is generated based on the personalized avatar. The apparatus includes an acquisition unit that acquires a three-dimensional model based on depth information and a color image from at least one point of view, a selection unit that selects at least one of a plurality of three-dimensional template models, and a generation unit that modifies the at least one selected three-dimensional template model and generates three-dimensional output data based on the three-dimensional model acquired by the acquisition unit.
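The selection step can be pictured as choosing the template model whose surface lies closest to the acquired scan. A hedged sketch under assumed inputs (point clouds as NumPy arrays; the symmetric mean nearest-vertex distance is my illustrative score, not the patent's criterion):

```python
import numpy as np

def select_template(scan, templates):
    """Return the index of the template whose vertices best match the
    acquired 3D model, scored by symmetric mean nearest-vertex distance."""
    def mean_nn(a, b):
        # for each point in a, distance to its nearest point in b
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return d.min(axis=1).mean()
    scores = [mean_nn(scan, t) + mean_nn(t, scan) for t in templates]
    return int(np.argmin(scores))
```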
Abstract:
Disclosed herein are a device and method for supporting 3D object printing and an apparatus for providing a 3D object printing service. A proposed device for supporting 3D object printing includes an information collection unit for collecting preference information of a user and performance information of a 3D printer. A download unit downloads a 3D model that is an object to be printed and model information defined in the 3D model in response to a printable selection signal. A model information creation unit creates new model information based on the 3D model and the model information defined in the 3D model. A print control command generation unit generates a print control command based on the preference information of the user and the performance information of the 3D printer, output from the information collection unit, and the new model information, output from the model information creation unit.
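Conceptually, the print control command combines user preferences clamped to what the printer reports it can do with the regenerated model information. A minimal sketch with hypothetical field names (the patent does not define the command format):

```python
def build_print_command(preferences, printer_caps, model_info):
    """Combine user preferences, printer performance, and model
    information into one print control command (illustrative fields)."""
    # Clamp the preferred layer height to the printer's supported range
    layer = min(max(preferences["layer_height_mm"],
                    printer_caps["min_layer_height_mm"]),
                printer_caps["max_layer_height_mm"])
    return {
        "model": model_info["name"],
        "layer_height_mm": layer,
        "infill_percent": min(preferences.get("infill_percent", 20), 100),
        "material": preferences.get("material", printer_caps["materials"][0]),
    }
```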
Abstract:
An apparatus includes: an input/output interface configured to receive a reference surface model and a floating surface model; a memory storing instructions for registration of the reference surface model and the floating surface model; and a processor configured to register the two models according to the instructions. The instructions perform: selecting initial transformation parameters for the floating surface model by comparing depth images of the reference and floating surface models; transforming the floating surface model according to the initial transformation parameters; calculating compensation transformation parameters from a matrix obtained by applying singular value decomposition to the cross-covariance matrix between the reference and floating surface models; and transforming the floating surface model according to the compensation transformation parameters, thereby executing registration of the reference surface model and the floating surface model.
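The compensation step described here matches the classical Kabsch/Umeyama solution: the SVD of the cross-covariance matrix between corresponded point sets yields the optimal rigid rotation, and the translation follows from the centroids. A minimal NumPy sketch, assuming point correspondences between the two surface models are already established:

```python
import numpy as np

def compensation_transform(reference, floating):
    """Estimate the rigid transform (R, t) aligning `floating` to
    `reference` (corresponding points, shape (N, 3)) via SVD of the
    cross-covariance matrix, as in the Kabsch/Umeyama method."""
    ref_c = reference.mean(axis=0)
    flo_c = floating.mean(axis=0)
    # Cross-covariance between the centred point sets
    H = (floating - flo_c).T @ (reference - ref_c)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ref_c - R @ flo_c
    return R, t
```

Applying `R @ p + t` to each floating-model point then registers it against the reference model, which is the final transformation step the instructions describe.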
Abstract:
Disclosed herein are an apparatus and method for estimating the joint structure of a human body. The apparatus includes a multi-view image acquisition unit for receiving multi-view images acquired by capturing a human body. A human body foreground separation unit extracts a foreground region corresponding to the human body from the acquired multi-view images. A human body shape restoration unit restores voxels indicating geometric space occupation information of the human body using the foreground region corresponding to the human body, thus generating voxel-based three-dimensional (3D) shape information of the human body. A skeleton information extraction unit generates 3D skeleton information from the generated voxel-based 3D shape information of the human body. A skeletal structure estimation unit estimates positions of respective joints from a skeletal structure of the human body using both the generated 3D skeleton information and anthropometric information.
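The voxel restoration step is essentially shape-from-silhouette: a voxel is kept as occupied only if it projects inside the human-body foreground in every view. A simplified sketch (the 3x4 projection matrices and boolean foreground masks are assumed inputs, not the patent's exact interfaces):

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_points):
    """Keep voxels whose projections fall inside every foreground
    silhouette (a shape-from-silhouette sketch).
    `silhouettes`: list of HxW boolean masks; `projections`: list of
    3x4 camera projection matrices; `grid_points`: (N, 3) voxel centres."""
    occupied = np.ones(len(grid_points), dtype=bool)
    homog = np.hstack([grid_points, np.ones((len(grid_points), 1))])
    for mask, P in zip(silhouettes, projections):
        uvw = homog @ P.T                      # project voxels into the view
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(grid_points), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        occupied &= hit
    return occupied
```

The surviving occupied voxels form the 3D shape information from which the skeleton and joint positions are then derived.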
Abstract:
Disclosed herein are a method and apparatus for providing a user-customized augmented-reality service. The method extracts the basic ergonomic information of a user by sensing the body of the user while the augmented-reality service is in use; generates user-customized ergonomic information by modifying the basic ergonomic information based on at least one of misrecognition occurring in predefined basic interaction and user evaluation information; detects the physical characteristics of the user by comparing the basic ergonomic information with the user-customized ergonomic information; defines user-customized interaction by reflecting the physical characteristics of the user in the predefined basic interaction; extracts the unique characteristics of the user from usage data accumulated through the user-customized interaction; and updates the user-customized interaction to match those unique characteristics.
Abstract:
Disclosed herein are an apparatus and method for calibrating an augmented-reality image. The apparatus includes a camera unit for capturing an image and measuring 3D information pertaining to the image, an augmented-reality image calibration unit for generating an augmented-reality image using the image and the 3D information and for calibrating the augmented-reality image, and a display unit for displaying the augmented-reality image.
Abstract:
Disclosed herein are an apparatus and method for reconstructing a three-dimensional (3D) face based on multiple cameras. The apparatus includes a multi-image analysis unit, a texture image separation unit, a reconstruction image automatic synchronization unit, a 3D appearance reconstruction unit, and a texture processing unit. The multi-image analysis unit determines the resolution information of images received from a plurality of cameras, and determines whether the images have been synchronized with each other. The texture image separation unit separates a texture processing image by comparing the resolutions of the received images. The reconstruction image automatic synchronization unit synchronizes images that are determined to be asynchronous images by the multi-image analysis unit. The 3D appearance reconstruction unit computes the 3D coordinate values of the synchronized images, and reconstructs a 3D appearance image. The texture processing unit reconstructs a 3D image by mapping the texture processing image to the 3D appearance image.
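The automatic synchronization step can be illustrated as nearest-timestamp frame matching against a reference camera stream. A hedged sketch (the patent does not specify the mechanism, so per-frame timestamps and the tolerance are assumptions):

```python
def synchronize(ref_times, other_times, tol):
    """For each reference timestamp, pick the index of the closest frame
    in the other stream, or None if nothing lies within `tol` seconds."""
    matches = []
    for t in ref_times:
        best = min(range(len(other_times)),
                   key=lambda i: abs(other_times[i] - t))
        matches.append(best if abs(other_times[best] - t) <= tol else None)
    return matches
```

Frames matched this way across all cameras would then feed the 3D appearance reconstruction, while unmatched frames are dropped as asynchronous.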
Abstract:
Disclosed herein are an image processing apparatus and method for calibrating the depth of a depth sensor. The image processing method may include obtaining a depth image of a target object captured by a depth sensor and a color image of the target object captured by a color camera; and calibrating the depth of the depth sensor by calibrating a geometrical relation between a projector and a depth camera, which are included in the depth sensor, based on the obtained depth and color images, and calculating a corrected feature point on the image plane of the depth camera that corresponds to a feature point on the image plane of the projector.
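For feature points lying on a common calibration plane, the projector-to-camera relation reduces to a 3x3 homography; once estimated, the corrected camera-plane feature point follows by point transfer. A sketch using the plain direct linear transform, as a simplification of whatever calibration the patent actually performs:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate H (3x3) with dst ~ H @ src in homogeneous coordinates,
    from >= 4 point correspondences via the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def transfer_point(H, pt):
    """Map one projector-plane feature point onto the camera plane."""
    x = H @ np.array([pt[0], pt[1], 1.0])
    return x[:2] / x[2]
```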
Abstract:
Disclosed herein are an apparatus and method for automatically creating a 3D personalized figure suitable for 3D printing by detecting a face area and region-specific features from face data acquired by heterogeneous sensors and by optimizing global/local transformations. The 3D personalized figure creation apparatus acquires face data of a user corresponding to a reconstruction target; extracts feature points for respective regions from the face data and reconstructs unique 3D models of the user's face based on the extracted feature points; creates 3D figure models based on the unique 3D models together with previously stored facial-expression models and body/adornment models; and verifies whether each 3D figure model has a structure and shape suitable for actual 3D printing, corrects and edits the model based on the verification results, and outputs a 3D figure model ready for 3D printing.
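One concrete check in such a printability verification step is the closed-manifold (watertight) test: every edge of the triangle mesh must be shared by exactly two faces. A minimal sketch (the patent's actual verification conditions are not spelled out, so this is illustrative):

```python
from collections import Counter

def is_watertight(triangles):
    """Return True if every edge is shared by exactly two triangles,
    a necessary condition for a printable, closed mesh."""
    edges = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted(e))] += 1
    return all(n == 2 for n in edges.values())
```

A model failing this test would be routed to the correction and editing stage before output.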