Abstract:
Disclosed herein are an apparatus and method for generating a 3D avatar. The method, performed by the apparatus, includes performing a 3D scan of the body of a user using an image sensor, generating a 3D scan model from the result of the 3D scan, matching the 3D scan model with a previously stored template avatar, and generating a 3D avatar based on the result of the matching.
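A minimal sketch of the matching step above, assuming the 3D scan model and the template avatar are available as corresponding point sets and that matching reduces to a rigid (Kabsch/Procrustes) alignment; the function and array names are illustrative, not part of the abstract.

```python
# Hypothetical rigid alignment of a scanned point set to a template avatar,
# assuming one-to-one point correspondences (an assumption, not stated above).
import numpy as np

def rigid_align(scan_points: np.ndarray, template_points: np.ndarray):
    """Return rotation R and translation t mapping scan_points onto template_points."""
    scan_c = scan_points.mean(axis=0)
    tmpl_c = template_points.mean(axis=0)
    # Covariance of the centered point sets.
    H = (scan_points - scan_c).T @ (template_points - tmpl_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # avoid a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tmpl_c - R @ scan_c
    return R, t

# Usage: aligned_scan = (R @ scan_points.T).T + t
```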
Abstract:
According to one general aspect, an apparatus for extracting an object includes an image receiver configured to receive an image; a coupled saliency-map generator configured to generate a coupled saliency-map obtained by adding a local saliency-map to the product of a global saliency-map of the image and a predetermined weight value; an adaptive tri-map generator configured to generate an adaptive tri-map corresponding to the coupled saliency-map; an alpha matte generator configured to generate an alpha matte based on the adaptive tri-map; and an object detector configured to extract an object according to the transparency of the alpha matte to generate an object image.
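A short sketch of the coupled saliency-map and adaptive tri-map described above; the weight value, the thresholds, and the array names are assumptions chosen for illustration, since the abstract does not specify them.

```python
# Coupled saliency map = (global map * weight) + local map, followed by a
# simple threshold-based tri-map; thresholds lo/hi are hypothetical.
import numpy as np

def coupled_saliency(global_map: np.ndarray, local_map: np.ndarray, w: float = 0.5):
    """Combine the global and local saliency maps and rescale to [0, 1]."""
    coupled = w * global_map + local_map
    return (coupled - coupled.min()) / (coupled.max() - coupled.min() + 1e-8)

def adaptive_trimap(coupled: np.ndarray, lo: float = 0.2, hi: float = 0.8):
    """Label pixels as background (0), unknown (128), or foreground (255)."""
    trimap = np.full(coupled.shape, 128, dtype=np.uint8)
    trimap[coupled <= lo] = 0
    trimap[coupled >= hi] = 255
    return trimap
```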
Abstract:
Disclosed herein is an apparatus for controlling transmit power, including: a global positioning system (GPS) receiving unit receiving GPS signals from one or more satellites and measuring signal strengths of the GPS signals; a processor calculating, according to a predefined command, transmit power corresponding to a current position determined from the GPS signals; a memory storing the command therein; and a communication interface transmitting a data signal including data in accordance with the transmit power.
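An illustrative sketch of the control flow implied by the abstract, assuming a simple rule that lowers transmit power when the measured GPS signal strength (used here as a proxy for an unobstructed position) is high; the dBm thresholds and the read_gps()/send() helpers are hypothetical.

```python
# Hypothetical transmit-power control loop: GPS receiving unit -> processor
# executing a predefined command -> communication interface.
from dataclasses import dataclass

@dataclass
class GpsFix:
    latitude: float
    longitude: float
    signal_strength_dbm: float   # measured strength of the received GPS signals

def transmit_power_dbm(fix: GpsFix) -> float:
    """Predefined command: choose transmit power for the current position."""
    if fix.signal_strength_dbm > -130.0:   # strong GPS reception, assume open area
        return 10.0
    return 20.0                            # weak reception, assume obstructed area

def control_loop(read_gps, send):
    fix = read_gps()                          # GPS receiving unit
    power = transmit_power_dbm(fix)           # processor applies the stored command
    send(data=b"payload", power_dbm=power)    # communication interface
```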
Abstract:
Disclosed herein are a virtual content-mixing method for augmented reality and an apparatus for the same. The virtual content-mixing method includes generating lighting physical-modeling data based on actual lighting information for outputting virtual content, generating camera physical-modeling data by acquiring a plurality of parameters corresponding to a camera, and mixing the virtual content with an image that is input through an RGB camera, based on the lighting physical-modeling data and the camera physical-modeling data.
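A minimal sketch of the mixing step described above, assuming the lighting physical-modeling data reduces to a per-channel light color/intensity and the camera physical-modeling data to exposure and gamma parameters; all names and parameter choices are illustrative, as the abstract does not define the actual models.

```python
# Hypothetical mixing of virtual content with an RGB camera image:
# relight the virtual content, simulate the camera response, then composite.
import numpy as np

def mix_virtual_content(camera_rgb, virtual_rgb, virtual_alpha,
                        light_rgb=(1.0, 1.0, 1.0), exposure=1.0, gamma=2.2):
    """camera_rgb/virtual_rgb: HxWx3 floats in [0, 1]; virtual_alpha: HxW in [0, 1]."""
    lit = virtual_rgb * np.asarray(light_rgb)                   # lighting physical-modeling data
    cam = np.clip(exposure * lit, 0.0, 1.0) ** (1.0 / gamma)    # camera physical-modeling data
    alpha = virtual_alpha[..., None]
    return alpha * cam + (1.0 - alpha) * camera_rgb             # composite over the camera image
```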
Abstract:
An apparatus and method for extracting an object of interest from an image using image matting are disclosed herein. The apparatus for extracting an object of interest from an image using image matting includes a saliency map generation unit, a trimap generation unit, and an alpha map generation unit. The saliency map generation unit generates a saliency map corresponding to an object of interest inside an input image using a color space probability distribution corresponding to the input image. The trimap generation unit generates meta-trimaps using filters, and generates a trimap by clustering the meta-trimaps. The alpha map generation unit generates an alpha map using the trimap and a matting Laplacian matrix, and extracts the object of interest based on image matting using the alpha map and the input image.
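A sketch of the alpha-map step described above, assuming the matting Laplacian L (a sparse N x N matrix over the image pixels) has already been built; the alpha map is obtained by solving the constrained linear system (L + lam * D) * alpha = lam * D * b, where D marks the known trimap pixels and b holds their alpha values. The regularization weight lam and the helper names are assumptions for illustration.

```python
# Hypothetical alpha-map solve from a trimap and a precomputed matting Laplacian.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_alpha(L: sp.spmatrix, trimap: np.ndarray, lam: float = 100.0):
    """trimap: uint8 image with 0 (background), 128 (unknown), 255 (foreground)."""
    flat = trimap.reshape(-1).astype(np.float64) / 255.0
    known = (flat <= 0.0) | (flat >= 1.0)          # definite background / foreground
    D = sp.diags(known.astype(np.float64))         # indicator of constrained pixels
    b = lam * (D @ flat)                           # known alpha values, scaled by lam
    alpha = spla.spsolve((L + lam * D).tocsc(), b)
    return np.clip(alpha, 0.0, 1.0).reshape(trimap.shape)
```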