Abstract:
Disclosed are an apparatus and a method for producing a 3D model with a static background using a point cloud and an image obtained through 3D scanning. The apparatus includes an image matching unit for producing a matched image by matching a point cloud obtained by scanning a predetermined region to a camera image obtained by photographing the same region; a mesh model processing unit for producing a mesh model of an object positioned in the region; and a 3D model processing unit for producing a 3D model of the object by applying texture information obtained from the matched image to the mesh model. The disclosed apparatus and method may be used for a 3D map service.
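As an illustration of the matching step, the sketch below projects scanned points into the camera image with a standard pinhole model and samples colors for later texturing; the intrinsics K and pose (R, t) are hypothetical inputs, since the abstract does not specify how the matching is parameterized.

    import numpy as np

    def project_points(points, K, R, t):
        """Project Nx3 world points into the image plane (pinhole camera model)."""
        cam = points @ R.T + t           # world -> camera coordinates
        uv = cam @ K.T                   # apply camera intrinsics
        return uv[:, :2] / uv[:, 2:3]    # perspective divide -> Nx2 pixel coords

    def sample_colors(image, uv):
        """Sample the camera image at each projected point (nearest pixel)."""
        h, w = image.shape[:2]
        px = np.clip(uv.round().astype(int), 0, [w - 1, h - 1])
        return image[px[:, 1], px[:, 0]]  # image arrays are indexed (row, column)

Each scanned point then carries a color from the matched image, which the mesh model inherits as texture information.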
Abstract:
A method for calibrating a camera network includes generating, by each of the cameras, a projection matrix with respect to a calibration pattern that is disposed at a plurality of different positions in a photography zone. A portion of each projection matrix is extracted as a sub-projection matrix, and the sub-projection matrices are arranged widthwise and lengthwise to generate one sub-measurement matrix. A singular value decomposition (SVD) is performed on the sub-measurement matrix to reduce it to a matrix of rank 3, and a second SVD is performed on the rank-3 matrix. From this factorization, a rotation value of the calibration pattern and the internal parameters and rotation value of each camera are extracted, thereby calibrating the camera network.
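The two SVD steps can be sketched with NumPy; the exact block layout of the sub-measurement matrix is an assumption here, and the recovered factors are determined only up to a common 3x3 transform that a full calibration would resolve.

    import numpy as np

    def enforce_rank3(W):
        """Replace the sub-measurement matrix W with its nearest rank-3 matrix."""
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        s[3:] = 0.0                      # zero every singular value past the third
        return (U * s) @ Vt              # rebuild W with rank <= 3

    def factor_rank3(W3):
        """Second SVD: split the rank-3 matrix into camera and pattern factors."""
        U, s, Vt = np.linalg.svd(W3, full_matrices=False)
        cameras = U[:, :3] * np.sqrt(s[:3])          # stacked camera factors
        patterns = np.sqrt(s[:3])[:, None] * Vt[:3]  # stacked pattern factors
        return cameras, patterns         # equal up to a shared 3x3 ambiguity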
Abstract:
A method for rendering a point cloud using a voxel grid includes generating a bounding box that contains the entire point cloud and dividing the bounding box into voxels to form the voxel grid; and allocating at least one texture plane to each voxel of the grid. Further, the method includes orthogonally projecting the points within each voxel onto its allocated texture planes to generate texture images; and rendering each voxel by selecting one of its texture planes, based on the center position of the voxel and the 3D camera position, and rendering with the texture image corresponding to the selected plane.
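A minimal sketch of the grid construction and the view-dependent plane selection, assuming one axis-aligned texture plane per axis; the helper names are illustrative.

    import numpy as np

    def build_voxel_grid(points, n):
        """Divide the point cloud's bounding box into an n x n x n voxel grid."""
        lo, hi = points.min(axis=0), points.max(axis=0)  # bounding box corners
        size = (hi - lo) / n                             # voxel edge lengths
        idx = np.floor((points - lo) / size).clip(0, n - 1).astype(int)
        return lo, size, idx             # voxel index of every point

    def select_plane(voxel_center, camera_pos):
        """Pick the texture plane whose normal best aligns with the view ray."""
        view = camera_pos - voxel_center
        return int(np.argmax(np.abs(view)))  # axis index: 0 = X, 1 = Y, 2 = Z

At render time, each voxel is drawn with the texture image that was generated for its selected plane.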
Abstract:
A wireless power transmission apparatus includes at least one power transmission antenna for transmitting a wireless power signal in a magnetic resonance manner, each antenna using a resonant frequency with a different bandwidth; a wireless power signal generating module for generating the wireless power signal; at least one wireless power converting module for converting the power level of the wireless power signal generated by the signal generating module, each converting module having a power level conversion range corresponding to the bandwidth of the resonant frequency of a power transmission antenna; a multiplexer matching module for selectively connecting each wireless power converting module to its corresponding power transmission antenna; and a control unit for selectively connecting a power transmission antenna and a wireless power converting module according to the power required by the device to be charged, thereby adjusting the power level of the wireless power signal.
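The control unit's behavior reduces to matching the required power to a conversion range; the sketch below illustrates that selection with invented channel names and wattage ranges, since the abstract gives no concrete values.

    # Hypothetical (antenna, converter) pairs and the power range each covers (watts).
    CHANNELS = [
        ("antenna_low",  "converter_low",  (0.0, 5.0)),
        ("antenna_mid",  "converter_mid",  (5.0, 15.0)),
        ("antenna_high", "converter_high", (15.0, 40.0)),
    ]

    def select_channel(required_power):
        """Connect the converting module whose conversion range covers the demand."""
        for antenna, converter, (lo, hi) in CHANNELS:
            if lo <= required_power < hi:
                return antenna, converter
        raise ValueError("required power outside all conversion ranges")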
Abstract:
Disclosed herein is a 3D model shape transformation apparatus. The 3D model shape transformation apparatus includes a camera unit, a shape restoration unit, a skeleton structure generation unit, and a skeleton transformation unit. The camera unit obtains a plurality of 2D images in a single frame by capturing the shape of an object. The shape restoration unit generates a 3D volume model by restoring the shape of the object based on the plurality of 2D images. The skeleton structure generation unit generates the skeleton structure of the 3D volume model. The skeleton transformation unit transforms the size and posture of the 3D volume model into those of a template model by matching the skeleton structure of the template model with the skeleton structure of the 3D volume model.
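The size-transformation half of the skeleton matching can be sketched as rescaling each bone of the volume model to the template's bone length; the data layout (joint positions plus parent indices, with parents listed before children) is an assumption, and the posture half would analogously copy the template's joint directions.

    import numpy as np

    def retarget_bone_lengths(joints, parents, template_lengths):
        """Rescale every bone of the volume model to the template's length."""
        out = joints.copy()
        for j, p in enumerate(parents):  # parents must precede their children
            if p < 0:
                continue                 # the root joint keeps its position
            d = joints[j] - joints[p]    # bone direction in the volume model
            d = d / np.linalg.norm(d)
            out[j] = out[p] + d * template_lengths[j]  # template size, same posture
        return out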
Abstract:
Disclosed herein is an apparatus for generating a digital actor based on multiple images. The apparatus includes a reconstruction appearance generation unit, a texture generation unit, and an animation assignment unit. The reconstruction appearance generation unit generates a reconstruction model of the target object's appearance by extracting 3-dimensional (3D) geometric information of the object from images captured by multiple cameras oriented in different directions. The texture generation unit generates a texture image for the reconstruction model based on texture coordinate information calculated from the model. The animation assignment unit allocates an animation to each joint of the completed reconstruction model, to which the texture image has been applied, by adding motion data to the joint.
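One common way to realize the texture generation step is to assign each face of the reconstruction model to the camera that views it most frontally; the sketch below follows that convention, which is an assumption rather than the patent's stated method.

    import numpy as np

    def best_camera_per_face(face_normals, face_centers, camera_positions):
        """For each face, pick the camera whose view direction is most front-facing."""
        scores = []
        for cam in camera_positions:
            view = cam - face_centers                # face -> camera rays
            view = view / np.linalg.norm(view, axis=1, keepdims=True)
            scores.append(np.sum(face_normals * view, axis=1))  # cosine of view angle
        return np.argmax(np.stack(scores), axis=0)   # camera index per face

Texture coordinates for each face are then taken from its assigned camera's image.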
Abstract:
An apparatus for processing an effect using style lines includes a contour line creation unit for creating contour lines using the polygons of a three-dimensional (3D) object and the location information of a camera; a style line creation unit for putting edge lists, extracted while the contour lines are created, into groups and creating one or more style lines for each group; and an effect processing unit for representing a line style by inserting the created style lines inside and outside the contour line corresponding to each group. Adding style lines to existing contour lines thus enables renderings that appear to have been drawn by hand, one of the many line styles of non-photorealistic rendering.
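Contour edges are conventionally those shared by one front-facing and one back-facing polygon relative to the camera; a minimal sketch of that test, with a hypothetical edge-list layout, is given below. Chains of connected surviving edges form the groups from which style lines are created.

    import numpy as np

    def contour_edges(edges, face_normals, face_centers, camera_pos):
        """Keep edges whose two adjacent faces differ in facing (silhouette test).

        edges: iterable of (vertex_pair, face_index_a, face_index_b).
        """
        kept = []
        for verts, fa, fb in edges:
            front_a = np.dot(face_normals[fa], camera_pos - face_centers[fa]) > 0
            front_b = np.dot(face_normals[fb], camera_pos - face_centers[fb]) > 0
            if front_a != front_b:       # exactly one adjacent face faces the camera
                kept.append(verts)
        return kept                      # edge list to group into style lines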
Abstract:
Provided are a unified framework based on extensible styles for 3D non-photorealistic rendering and a method of configuring the framework. The unified framework includes: 3D model data processing means for generating a scene graph by converting an input 3D model into 3D data and organizing the scene graph using vertices, faces, and edges; face painting means for selecting a brusher to paint the faces (interiors) of the 3D model using the scene graph; line drawing means for extracting line information from the 3D model using the scene graph and managing the extracted line information; style expressing means for generating a rendering style for the 3D model and storing the rendering style as a stroke, the rendering style being applied equally to the face-painting and line-drawing methods; and rendering means for combining the stroke and the selected brusher to render the 3D model using both the face-painting method and the line-drawing method. The framework can be used to develop tools and new rendering styles for non-photorealistic rendering and animation.
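The framework's central idea, a single stroke style shared by the face-painting and line-drawing paths, can be sketched with a small class structure; the class and method names below are illustrative, not the framework's actual API.

    class Stroke:
        """A rendering style (width, color, ...) stored once and reused everywhere."""
        def __init__(self, width, color):
            self.width, self.color = width, color

    class Brusher:
        """Paints the faces (interiors) of the model with a given stroke."""
        def paint(self, faces, stroke):
            for face in faces:
                print(f"fill face {face} with color {stroke.color}")

    class LineDrawer:
        """Draws the extracted lines with the same stroke, keeping styles consistent."""
        def draw(self, lines, stroke):
            for line in lines:
                print(f"draw line {line} at width {stroke.width}")

    def render(scene_graph, stroke):
        Brusher().paint(scene_graph["faces"], stroke)    # face-painting pass
        LineDrawer().draw(scene_graph["lines"], stroke)  # line-drawing pass

Because both passes consume the same Stroke, a rendering style defined once applies to interiors and lines alike.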
Abstract:
A system for producing cartoon animation using character animation and mesh deformation is provided. The system includes a motion analysis module, a mesh deformation module, a motion deformation module, and a skinning module. The motion analysis module receives existing motion data containing the non-deformed motions of a character, and extracts parameters from that data by analyzing the animation values of each of the character's joints. The mesh deformation module receives existing mesh data describing the character's external appearance and existing skinning data, which binds the bones to the mesh, and generates deformed mesh data. The motion deformation module receives the existing motion data and deforms the motion using the parameters, producing deformed motion data. The skinning module receives the deformed mesh data, the deformed motion data, and the existing skinning data, and generates character animation data having cartoon-like motion.
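A minimal sketch of the motion deformation step, assuming the extracted parameter is a per-joint exaggeration factor applied to the animation curves; the curve layout and the rest-value estimate are illustrative assumptions.

    import numpy as np

    def exaggerate_motion(curves, factors):
        """Deform motion by scaling each joint's deviation from its rest value.

        curves: dict of joint name -> (frames,) array of animation values.
        factors: dict of joint name -> exaggeration parameter from motion analysis.
        """
        deformed = {}
        for joint, values in curves.items():
            rest = values.mean()         # crude rest-value estimate
            deformed[joint] = rest + factors[joint] * (values - rest)
        return deformed                  # cartoon-like deformed motion data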