Abstract:
A camera auxiliary device for privacy protection and a privacy protection method using the camera auxiliary device. The camera auxiliary device includes a processor and a memory. The processor splits an input light beam reflected from a capturing target into a first input beam for detecting a privacy protection area and a second input beam to be transferred to a camera connected to a user terminal, detects the privacy protection area in an image signal generated based on the first input beam, and converts the second input beam before transferring it to the camera so that personal information included in the privacy protection area cannot be visually identified. The memory stores the image signal and the privacy protection area.
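The split-detect-convert flow can be illustrated with a minimal Python sketch, under assumptions not taken from the patent: the optical split is modeled as two copies of a captured frame, a hypothetical detect_privacy_regions() stands in for the device's detector, and mask_regions() pixelates the detected areas before the frame is handed to the camera.

```python
import numpy as np

def detect_privacy_regions(frame):
    """Hypothetical detector: returns bounding boxes (x, y, w, h) of
    privacy-sensitive areas in the analysis frame. A real device would
    run face/text detection here."""
    h, w = frame.shape[:2]
    return [(w // 4, h // 4, w // 2, h // 2)]  # placeholder region

def mask_regions(frame, regions, block=16):
    """Pixelate each detected region so personal information in it
    cannot be visually identified."""
    out = frame.copy()
    for x, y, w, h in regions:
        patch = out[y:y + h, x:x + w]
        # Coarse pixelation: average over block-sized tiles.
        for by in range(0, h, block):
            for bx in range(0, w, block):
                tile = patch[by:by + block, bx:bx + block]
                tile[...] = tile.mean(axis=(0, 1), keepdims=True)
    return out

def process_frame(input_frame):
    # The optical split is modeled as two copies of the same frame:
    # one for analysis, one to be converted and passed to the camera.
    analysis_beam = input_frame
    camera_beam = input_frame.copy()
    regions = detect_privacy_regions(analysis_beam)
    return mask_regions(camera_beam, regions), regions

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    protected, found = process_frame(frame)
    print("masked regions:", found)
```

In this reading, the conversion is any irreversible transform (blur, mosaic, redaction) applied only inside the detected area, so the rest of the frame reaches the camera unchanged.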
Abstract:
Disclosed herein are an apparatus and method for reconstructing a three-dimensional (3D) face based on multiple cameras. The apparatus includes a multi-image analysis unit, a texture image separation unit, a reconstruction image automatic synchronization unit, a 3D appearance reconstruction unit, and a texture processing unit. The multi-image analysis unit determines the resolution information of images received from a plurality of cameras, and determines whether the images have been synchronized with each other. The texture image separation unit separates a texture processing image by comparing the resolutions of the received images. The reconstruction image automatic synchronization unit synchronizes images that are determined to be asynchronous images by the multi-image analysis unit. The 3D appearance reconstruction unit computes the 3D coordinate values of the synchronized images, and reconstructs a 3D appearance image. The texture processing unit reconstructs a 3D image by mapping the texture processing image to the 3D appearance image.
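A structural sketch of how the five units might compose, assuming each camera image carries resolution and timestamp metadata; the interfaces, the timestamp tolerance, and the rule of choosing the highest-resolution image as the texture-processing image are illustrative assumptions rather than details from the abstract.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CameraImage:
    camera_id: int
    width: int
    height: int
    timestamp: float          # capture time in seconds
    pixels: object = None     # image payload (omitted in this sketch)

def analyze_images(images: List[CameraImage], tolerance=0.005):
    """Multi-image analysis: collect resolution info and decide whether the
    images are synchronized (timestamps within a tolerance)."""
    resolutions = {im.camera_id: (im.width, im.height) for im in images}
    times = [im.timestamp for im in images]
    return resolutions, (max(times) - min(times)) <= tolerance

def separate_texture_image(images: List[CameraImage]):
    """Texture image separation: assume the highest-resolution image is the
    texture-processing image; the rest are used for geometry."""
    texture = max(images, key=lambda im: im.width * im.height)
    return texture, [im for im in images if im is not texture]

def synchronize(images: List[CameraImage]) -> List[CameraImage]:
    """Reconstruction-image synchronization: align asynchronous images to a
    common reference time (here, simply the earliest timestamp)."""
    t0 = min(im.timestamp for im in images)
    return [CameraImage(im.camera_id, im.width, im.height, t0, im.pixels)
            for im in images]

def compute_3d_appearance(geometry):
    """Placeholder for multi-view computation of 3D coordinate values."""
    return {"vertices": [], "faces": []}

def map_texture(appearance, texture):
    """Placeholder for mapping the texture image onto the 3D appearance."""
    return {"mesh": appearance, "texture_camera": texture.camera_id}

def reconstruct_face(images: List[CameraImage]):
    resolutions, in_sync = analyze_images(images)
    texture, geometry = separate_texture_image(images)
    if not in_sync:
        geometry = synchronize(geometry)
    return map_texture(compute_3d_appearance(geometry), texture)
```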
Abstract:
Disclosed herein are an apparatus and method for automatically creating a 3D personalized figure suitable for 3D printing by detecting a face area and features for respective regions from face data acquired by heterogeneous sensors and by optimizing global/local transformation. The 3D personalized figure creation apparatus acquires face data of a user corresponding to a reconstruction target; extracts feature points for respective regions from the face data, and reconstructs unique 3D models of the user's face based on the extracted feature points; creates 3D figure models based on the unique 3D models and previously stored facial expression models and body/adornment models; and verifies whether each 3D figure model has a structure and a shape suitable for actual 3D printing, corrects and edits the 3D figure model based on the verification results, and outputs a 3D figure model suitable for 3D printing.
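The create-verify-correct loop can be sketched as follows, assuming two simple printability checks (watertightness and a minimum wall thickness); the FigureModel fields, thresholds, and correction stand-ins are illustrative assumptions, not details taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class FigureModel:
    mesh: dict                    # combined face/expression/body geometry
    min_wall_thickness_mm: float  # thinnest feature in the model
    is_watertight: bool           # a closed surface is required for printing

def create_figure_models(user_face_model, expression_models, body_models):
    """Combine the user's reconstructed face model with stored expression
    and body/adornment models into candidate figure models."""
    return [FigureModel(mesh={"face": user_face_model, "expression": e, "body": b},
                        min_wall_thickness_mm=1.2, is_watertight=True)
            for e in expression_models for b in body_models]

def verify_and_correct(model: FigureModel, min_thickness_mm: float = 1.0):
    """Check printability constraints and correct the model when they fail
    (stand-ins for hole filling and wall thickening)."""
    if not model.is_watertight:
        model.is_watertight = True
    if model.min_wall_thickness_mm < min_thickness_mm:
        model.min_wall_thickness_mm = min_thickness_mm
    return model

def create_printable_figures(user_face_model, expression_models, body_models):
    candidates = create_figure_models(user_face_model, expression_models, body_models)
    return [verify_and_correct(m) for m in candidates]
```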
Abstract:
Disclosed herein are a learning-based three-dimensional (3D) model creation apparatus and method. A method for operating a learning-based 3D model creation apparatus includes generating multi-view feature images using supervised learning, creating a 3D mesh model using a point cloud corresponding to the multi-view feature images and a feature image representing internal shape information, generating a texture map by projecting the 3D mesh model into three viewpoint images that are input, and creating a 3D model using the texture map.
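A rough sketch of the described pipeline, under assumptions not specified in the abstract: a trained network (the supervised-learning step) is represented by a generic callable that produces per-view depth/feature images, the point cloud is obtained by a simple back-projection, and the surface reconstruction and texture baking steps are placeholders.

```python
import numpy as np

def predict_multiview_features(input_image, model):
    """Assumed supervised-learning step: a trained network `model` maps the
    input view to multi-view feature images (e.g., per-viewpoint depth and
    silhouette) plus a feature image for internal shape information."""
    return model(input_image)

def feature_images_to_point_cloud(feature_images):
    """Back-project per-view depth images into one 3D point cloud
    (illustrative orthographic back-projection of nonzero depth values)."""
    points = []
    for view in feature_images["views"]:
        depth = view["depth"]                     # H x W depth map
        ys, xs = np.nonzero(depth > 0)
        pts = np.stack([xs, ys, depth[ys, xs]], axis=1).astype(float)
        points.append(view["rotation"] @ pts.T)   # rotate into world frame
    return np.concatenate(points, axis=1).T       # N x 3 points

def build_mesh(point_cloud, internal_shape_image):
    """Placeholder for surface reconstruction that also uses the internal
    shape feature image (e.g., to recover concave or hidden regions)."""
    return {"vertices": point_cloud, "faces": []}

def bake_texture_map(mesh, three_view_images):
    """Placeholder: project the mesh into the three input viewpoint images
    and gather colors into a texture map."""
    return {"mesh": mesh, "texture_sources": len(three_view_images)}
```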
Abstract:
Disclosed herein are an apparatus and method for guiding multi-view capture. The apparatus for guiding multi-view capture includes one or more processors and an execution memory for storing at least one program that is executed by the one or more processors, wherein the at least one program is configured to receive a single-view two-dimensional (2D) image obtained by capturing an image of an object of interest through a camera, generate an orthographic projection image and a perspective projection image for the object of interest from the single-view 2D image using an image conversion parameter that is previously learned from multi-view 2D images for the object of interest, generate a 3D silhouette model for the object of interest using the orthographic projection image and the perspective projection image, and output the 3D silhouette model and a guidance interface for the 3D silhouette model.
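A sketch of the projection-and-silhouette flow under strong assumptions: the learned image conversion parameter is represented by two stand-in callables, the 3D silhouette model is approximated as a visual-hull-style voxel intersection of the two projection images, and the guidance output is a toy heuristic; none of these choices are specified in the abstract.

```python
import numpy as np

def generate_projections(single_view_image, conversion_params):
    """Assumed learned image-conversion step: map the single-view 2D image to
    an orthographic projection image and a perspective projection image of the
    object of interest. `conversion_params` stands in for parameters learned
    offline from multi-view 2D images."""
    ortho = conversion_params["ortho_net"](single_view_image)
    persp = conversion_params["persp_net"](single_view_image)
    return ortho, persp

def build_silhouette_model(ortho, persp, depth_layers=32):
    """Carve a coarse 3D silhouette volume: keep a voxel column only where the
    pixel lies inside both the orthographic and perspective silhouettes."""
    h, w = ortho.shape
    inside = (ortho > 0) & (persp > 0)
    volume = np.zeros((depth_layers, h, w), dtype=bool)
    volume[:] = inside[None, :, :]
    return volume

def guidance_for_next_view(silhouette_volume):
    """Toy guidance rule: suggest capturing the side with the least silhouette
    coverage, i.e., the side the model is least constrained on."""
    half = silhouette_volume.shape[2] // 2
    left = silhouette_volume[:, :, :half].mean()
    right = silhouette_volume[:, :, half:].mean()
    return "move camera to the left" if left < right else "move camera to the right"
```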