Abstract:
Disclosed herein are an apparatus and method for generating a training data set for machine learning. The method for generating a training data set, performed by the apparatus for generating the training data set for machine learning, includes generating a 3D model for a deformed 3D character based on 3D data pertaining to the 3D character, generating a 2D image corresponding to the 3D model, and generating the training data set for machine learning, through which the 3D character is generated from the 2D image, using the 2D image and the 3D model.
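The pipeline described above (deform a 3D character, render a corresponding 2D image, and pair the two as one training sample) can be sketched as follows. This is a minimal illustration, not the patented method: the deformation is a hypothetical random per-axis scaling, and the "2D image" is a bare orthographic projection of the vertices.

```python
import numpy as np

def deform_character(vertices, scale_range=(0.9, 1.1), seed=None):
    # hypothetical deformation: random per-axis scaling of the 3D character
    rng = np.random.default_rng(seed)
    return vertices * rng.uniform(*scale_range, size=3)

def render_2d(vertices):
    # minimal "2D image": orthographic projection onto the XY plane
    return vertices[:, :2].copy()

def make_training_pair(vertices, seed=None):
    # one sample of the training data set: (2D image, deformed 3D model)
    model_3d = deform_character(vertices, seed=seed)
    return render_2d(model_3d), model_3d

# a toy character: the four vertices of a tetrahedron
base = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
image_2d, model_3d = make_training_pair(base, seed=0)
```

Repeating `make_training_pair` with different seeds yields many (2D image, 3D model) pairs, which is the shape of data set a 2D-to-3D learning model would consume.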
Abstract:
Disclosed herein are an apparatus and method for guiding multi-view capture. The apparatus for guiding multi-view capture includes one or more processors and an execution memory for storing at least one program that is executed by the one or more processors, wherein the at least one program is configured to receive a single-view two-dimensional (2D) image obtained by capturing an image of an object of interest through a camera, generate an orthographic projection image and a perspective projection image for the object of interest from the single-view 2D image using an image conversion parameter that is previously learned from multi-view 2D images for the object of interest, generate a 3D silhouette model for the object of interest using the orthographic projection image and the perspective projection image, and output the 3D silhouette model and a guidance interface for the 3D silhouette model.
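The geometric core of the abstract above (orthographic and perspective projection of an object, combined into a silhouette representation) can be sketched as below. This is a simplified stand-in under stated assumptions: points replace images, the learned image-conversion parameter is omitted, and the "3D silhouette model" is reduced to a coarse 2D occupancy grid.

```python
import numpy as np

def orthographic_project(points):
    # orthographic projection: drop the depth axis, (x, y, z) -> (x, y)
    return points[:, :2]

def perspective_project(points, focal=1.0):
    # pinhole perspective projection: (x, y, z) -> (f*x/z, f*y/z)
    z = points[:, 2:3]
    return focal * points[:, :2] / z

def silhouette_grid(points_2d, size=8, extent=2.0):
    # rasterise projected points into a coarse boolean occupancy grid
    grid = np.zeros((size, size), dtype=bool)
    idx = np.clip(((points_2d + extent) / (2 * extent) * size).astype(int),
                  0, size - 1)
    grid[idx[:, 1], idx[:, 0]] = True
    return grid

pts = np.array([[0.5, 0.5, 2.0], [-0.5, -0.5, 2.0], [0.0, 0.8, 2.5]])
ortho = orthographic_project(pts)
persp = perspective_project(pts)
silhouette = silhouette_grid(ortho) | silhouette_grid(persp)
```

A capture-guidance interface could then compare `silhouette` against the coverage expected from each candidate viewpoint and prompt the user toward under-covered views.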
Abstract:
Disclosed herein are a learning-based three-dimensional (3D) model creation apparatus and method. A method for operating a learning-based 3D model creation apparatus includes generating multi-view feature images using supervised learning, creating a three-dimensional (3D) mesh model using a point cloud corresponding to the multi-view feature images and a feature image representing internal shape information, generating a texture map by projecting the 3D mesh model into three viewpoint images that are input, and creating a 3D model using the texture map.
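One step named above, forming a point cloud that corresponds to multi-view feature images, can be illustrated with a toy back-projection. This is an assumption-laden sketch: the three views are treated as orthographic depth maps along the z, x, and y axes, and the supervised learning that produces them is not shown.

```python
import numpy as np

def depth_maps_to_point_cloud(front, side, top):
    # back-project three orthographic depth maps into one point cloud;
    # each map is H x W, and a nonzero pixel stores depth along that view's axis
    pts = []
    h, w = front.shape
    for y in range(h):
        for x in range(w):
            if front[y, x] > 0:            # front view: depth along +z
                pts.append((x, y, front[y, x]))
            if side[y, x] > 0:             # side view: depth along +x
                pts.append((side[y, x], y, x))
            if top[y, x] > 0:              # top view: depth along +y
                pts.append((x, top[y, x], y))
    return np.array(pts, dtype=float)

front = np.zeros((4, 4)); front[1, 1] = 2.0
side = np.zeros((4, 4)); side[2, 2] = 1.5
top = np.zeros((4, 4))
cloud = depth_maps_to_point_cloud(front, side, top)
```

In the patented pipeline a mesh would then be fitted to such a cloud and textured by projecting the mesh into the three input viewpoint images.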
Abstract:
Disclosed herein are an apparatus and method for generating a 3D model. The apparatus for generating a 3D model includes one or more processors, and an execution memory for storing at least one program that is executed by the one or more processors, wherein the at least one program is configured to receive two-dimensional (2D) original image layers for respective viewpoints, and generate pieces of 2D original image information for respective objects by performing original image alignment on the 2D original image layers for respective viewpoints for each predefined object type, generate 3D model layers for respective objects from the pieces of 2D original image information for respective objects using multiple learning models corresponding to the predefined object types, and generate a 3D model by synthesizing the 3D model layers for respective objects.
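The per-object routing described above (align 2D layers per object type, run the matching learning model, then synthesize the results) can be sketched as a dispatch table. All names here are hypothetical: `face_model` and `body_model` are placeholder functions standing in for trained per-type models, and "alignment" is reduced to averaging the per-viewpoint layers.

```python
import numpy as np

# hypothetical per-type "learning models": each maps aligned 2D info to a 3D layer
def face_model(info_2d):
    return {"type": "face", "verts": np.asarray(info_2d, dtype=float)}

def body_model(info_2d):
    return {"type": "body", "verts": np.asarray(info_2d, dtype=float) * 2.0}

MODELS = {"face": face_model, "body": body_model}

def align_layers(layers_by_view):
    # toy original-image alignment: average the per-viewpoint 2D layers
    return np.mean(np.stack(layers_by_view), axis=0)

def generate_3d_model(object_layers):
    # object_layers: {object type: [2D layer per viewpoint]} -> synthesized model
    parts = []
    for obj_type, views in object_layers.items():
        aligned = align_layers(views)
        parts.append(MODELS[obj_type](aligned))
    return parts

model = generate_3d_model({
    "face": [np.ones((2, 2)), np.ones((2, 2)) * 3.0],
    "body": [np.zeros((2, 2)), np.ones((2, 2))],
})
```

The final synthesis step here is a simple list of layers; a real implementation would merge the per-object 3D layers into one scene with consistent coordinates.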
Abstract:
Disclosed are an apparatus and a method for generating three-dimensional output data, in which the appearance or face of a user is easily reconstructed in three dimensions using one or more cameras that include a depth sensor, a three-dimensional avatar for an individual is produced through three-dimensional model transition, and data capable of being three-dimensionally output is generated based on that avatar. The apparatus includes an acquisition unit that acquires a three-dimensional model based on depth information and a color image from at least one viewpoint, a selection unit that selects at least one of a plurality of three-dimensional template models, and a generation unit that modifies the at least one three-dimensional template model selected by the selection unit and generates the three-dimensional output data based on the three-dimensional model acquired by the acquisition unit.
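The acquisition, selection, and generation units described above can be sketched as three small functions. This is a toy model under stated assumptions: the scan is a back-projected depth map, template selection compares bounding-box extents, and "model transition" is reduced to a centroid translation of the chosen template.

```python
import numpy as np

def acquire_model(depth, color):
    # acquisition unit: back-project a depth map into coloured 3D points
    ys, xs = np.nonzero(depth)
    pts = np.stack([xs, ys, depth[ys, xs]], axis=1).astype(float)
    return pts, color[ys, xs]

def select_template(model_pts, templates):
    # selection unit: pick the template whose bounding-box extent best
    # matches the acquired scan
    extent = model_pts.max(0) - model_pts.min(0)
    return min(templates, key=lambda k: np.linalg.norm(
        (templates[k].max(0) - templates[k].min(0)) - extent))

def fit_template(template_pts, model_pts):
    # generation unit (toy "model transition"): move the template's
    # centroid onto the scan's centroid
    return template_pts + (model_pts.mean(0) - template_pts.mean(0))

depth = np.zeros((3, 3)); depth[0, 0] = 1.0; depth[2, 2] = 1.0
color = np.zeros((3, 3, 3))
pts, cols = acquire_model(depth, color)
templates = {"small": np.array([[0., 0., 0.], [1., 1., 1.]]),
             "large": np.array([[0., 0., 0.], [10., 10., 10.]])}
chosen = select_template(pts, templates)
fitted = fit_template(templates[chosen], pts)
```

A real generation unit would deform the template non-rigidly toward the scan and then export the result in a 3D-printable format.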
Abstract:
Disclosed herein are an apparatus and method for automatically creating a 3D personalized figure suitable for 3D printing by detecting a face area and features for respective regions from face data acquired by heterogeneous sensors and by optimizing global/local transformation. The 3D personalized figure creation apparatus acquires face data of a user corresponding to a reconstruction target; extracts feature points for respective regions from the face data, and reconstructs unique 3D models of the user's face based on the extracted feature points; creates 3D figure models based on the unique 3D models and previously stored facial expression models and body/adornment models; and verifies whether each 3D figure model has a structure and a shape suitable for actual 3D printing, corrects and edits the 3D figure model based on the results of verification, and outputs a 3D figure model ready for 3D printing.
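The verification-and-correction step at the end of the abstract can be illustrated with a deliberately simple printability check. The minimum feature size and the check itself are assumptions for illustration: real verification would inspect wall thickness, watertightness, and overhangs, not just the bounding box.

```python
import numpy as np

MIN_FEATURE_MM = 0.8   # assumed minimum printable feature size, not from the patent

def verify_printable(verts, min_feature=MIN_FEATURE_MM):
    # simplified check: the model's thinnest bounding-box extent must be printable
    extent = verts.max(0) - verts.min(0)
    return bool(extent.min() >= min_feature)

def correct_for_printing(verts, min_feature=MIN_FEATURE_MM):
    # toy correction: uniformly scale the figure up until its thinnest
    # extent reaches the printable minimum
    extent = verts.max(0) - verts.min(0)
    factor = max(1.0, min_feature / extent.min())
    return verts * factor

# a figure with a 0.2 mm thin feature along z: fails, then passes after correction
verts = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 0.2]])
fixed = correct_for_printing(verts)
```

The verify-correct-verify loop mirrors the abstract's flow: check each 3D figure model, edit it based on the verification result, and only then output it for printing.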
Abstract:
Disclosed herein are an apparatus and method for reconstructing a three-dimensional (3D) face based on multiple cameras. The apparatus includes a multi-image analysis unit, a texture image separation unit, a reconstruction image automatic synchronization unit, a 3D appearance reconstruction unit, and a texture processing unit. The multi-image analysis unit determines the resolution information of images received from a plurality of cameras, and determines whether the images have been synchronized with each other. The texture image separation unit separates a texture processing image by comparing the resolutions of the received images. The reconstruction image automatic synchronization unit synchronizes images that are determined to be asynchronous images by the multi-image analysis unit. The 3D appearance reconstruction unit computes the 3D coordinate values of the synchronized images, and reconstructs a 3D appearance image. The texture processing unit reconstructs a 3D image by mapping the texture processing image to the 3D appearance image.
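Three of the units described above, texture-image separation by resolution, asynchrony detection, and automatic synchronization, can be sketched as follows. The data shapes and the nearest-timestamp strategy are assumptions for illustration; the patent's actual synchronization method is not specified in the abstract.

```python
def separate_texture_image(images):
    # texture image separation unit: pick the highest-resolution camera image
    return max(images, key=lambda k: images[k]["width"] * images[k]["height"])

def is_synchronized(timestamps, tol=0.005):
    # multi-image analysis unit: images are synchronized if all capture
    # times fall within `tol` seconds of each other
    return max(timestamps) - min(timestamps) <= tol

def synchronize(frames_by_cam, ref_time):
    # reconstruction image automatic synchronization unit: for asynchronous
    # streams, pick each camera's frame nearest the reference time
    return {cam: min(frames, key=lambda f: abs(f["t"] - ref_time))
            for cam, frames in frames_by_cam.items()}

images = {"camA": {"width": 1920, "height": 1080},
          "camB": {"width": 640, "height": 480}}
texture_cam = separate_texture_image(images)
synced = synchronize({"camA": [{"t": 0.000}, {"t": 0.033}],
                      "camB": [{"t": 0.010}, {"t": 0.045}]},
                     ref_time=0.030)
```

The synchronized frames would then feed the 3D appearance reconstruction unit, while the high-resolution image reserved by `separate_texture_image` is mapped onto the reconstructed surface.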