Abstract:
A marker-free motion capture apparatus having a function of correcting a tracking error and a method thereof are disclosed. The apparatus includes: a grouping unit for grouping feature candidates located within a threshold distance in a three-dimensional space at a previous time; a feature point selecting unit for generating a first curve connecting a predetermined number of feature points, and selecting the feature candidate closest to the first curve as the feature point of the previous time; a feature point correcting unit for generating a second curve connecting a predetermined number of feature points including the feature point of the previous time, and correcting a feature point of the current time, calculated based on a Kalman filtering scheme, by using the second curve; and a controlling unit for calculating the location of a feature point at each time using a Kalman filtering scheme and generally controlling the marker-free motion capture apparatus.
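The curve-based selection step above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes the "first curve" is a quadratic through the last three tracked points, extrapolated one step by Newton's forward difference (p_next = 3*p3 - 3*p2 + p1), and the function name and tuple representation are hypothetical.

```python
def select_feature_point(track, candidates):
    """Extrapolate a quadratic curve through the last three tracked 3D
    feature points and select the grouped candidate closest to the
    predicted location (a simplified stand-in for the patented curve).

    track:      list of (x, y, z) feature points in time order
    candidates: list of (x, y, z) grouped feature candidates
    Returns (index of best candidate, predicted point).
    """
    p1, p2, p3 = track[-3], track[-2], track[-1]
    # Newton forward-difference extrapolation of a quadratic.
    predicted = tuple(3 * c - 3 * b + a for a, b, c in zip(p1, p2, p3))

    def dist2(p, q):
        return sum((u - v) ** 2 for u, v in zip(p, q))

    best = min(range(len(candidates)),
               key=lambda i: dist2(candidates[i], predicted))
    return best, predicted
```

For example, a track accelerating along x as 0, 1, 4 predicts x = 9 next, so a candidate near (9, 0, 0) is chosen over one back at the origin.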
Abstract:
Provided is an apparatus and method for determining stereo disparity based on two-path dynamic programming and GGCP. The apparatus includes a pre-processing unit for analyzing the texture distribution of an input image by using a Laplacian of Gaussian (LoG) filter and dividing the input image into a homogeneous region and a non-homogeneous region; a local matching unit for determining candidate disparities for each pixel; a local post-processing unit for improving the reliability of the candidate disparities by performing a visibility test between the candidate disparities in each pixel and removing candidate disparities in pixels of low reliability; and a global optimizing unit for determining a final disparity among the candidate disparities in each pixel by performing dynamic programming.
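The pre-processing split into homogeneous and non-homogeneous regions can be sketched as below. This is a hedged simplification: it applies a discrete 4-neighbor Laplacian and thresholds its magnitude, whereas the patent specifies a full Laplacian of Gaussian (i.e. Gaussian smoothing first); the function name and threshold value are assumptions.

```python
def split_regions(image, threshold=4.0):
    """Label each interior pixel of a grayscale image (list of lists)
    as 'homogeneous' (flat texture) or 'non-homogeneous' (textured),
    using a discrete Laplacian response as a simplified LoG stand-in."""
    h, w = len(image), len(image[0])
    labels = [['homogeneous'] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbor discrete Laplacian at (x, y).
            lap = (image[y - 1][x] + image[y + 1][x]
                   + image[y][x - 1] + image[y][x + 1]
                   - 4 * image[y][x])
            if abs(lap) > threshold:
                labels[y][x] = 'non-homogeneous'
    return labels
```

A flat patch yields only homogeneous labels, while pixels along an intensity step are labeled non-homogeneous, which is where the local matcher would trust its candidate disparities most.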
Abstract:
Provided is a method for generating a three-dimensional (3D) mesh based on unorganized sparse 3D points, which produces a mesh model displaying a 3D surface from unorganized sparse 3D points extracted from a plurality of two-dimensional images. The 3D mesh generating method includes the steps of: receiving a plurality of unorganized sparse 3D points, a plurality of two-dimensional (2D) corresponding point information, and images; generating an initial mesh by using the received 2D corresponding point information; removing abnormal faces from the initial mesh; checking whether unused 2D corresponding point information exists among the received 2D corresponding point information; if unused 2D corresponding point information exists, reorganizing the initial mesh by performing a constrained Delaunay triangulation; and if no unused 2D corresponding point information exists, generating a final mesh.
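The abnormal-face removal step can be sketched as follows. The patent does not define "abnormal face" in the abstract, so this sketch assumes one common criterion: discard any triangle with an edge longer than a threshold (sliver faces spanning unrelated points); the function name and threshold are hypothetical.

```python
import math

def remove_abnormal_faces(vertices, faces, max_edge=2.0):
    """Drop mesh faces having any edge longer than max_edge.

    vertices: list of coordinate tuples (2D or 3D points)
    faces:    list of (i, j, k) vertex-index triangles
    Returns the list of faces that pass the edge-length test.
    """
    def edge(a, b):
        return math.dist(vertices[a], vertices[b])

    kept = []
    for (i, j, k) in faces:
        # Keep the face only if its longest edge is within the limit.
        if max(edge(i, j), edge(j, k), edge(k, i)) <= max_edge:
            kept.append((i, j, k))
    return kept
```

In practice the threshold would be tied to the sampling density of the sparse 3D points rather than a fixed constant.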
Abstract:
A three-dimensional animation system using evolutionary computation includes a gene determination unit and a motion generation unit. The gene determination unit calculates modified gene information by receiving one or more genes and modifying them evolutionarily. The motion generation unit receives motion data and modifies the motion data based on the modified gene information. A three-dimensional animation method is also disclosed.
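The two units can be sketched as below. The abstract does not specify the evolutionary operators or the gene-to-motion mapping, so this sketch assumes Gaussian mutation for the gene determination unit and per-joint scaling of a motion frame for the motion generation unit; all names are hypothetical.

```python
import random

def evolve_genes(genes, sigma=0.1, rng=None):
    """Gene determination unit sketch: apply Gaussian mutation to a
    gene vector (one simple evolutionary operator; the system's actual
    operators are not given in the abstract)."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    return [g + rng.gauss(0.0, sigma) for g in genes]

def apply_genes(motion_frame, genes):
    """Motion generation unit sketch: modify one frame of motion data
    by scaling each joint value with its corresponding gene."""
    return [value * g for value, g in zip(motion_frame, genes)]
```

A full system would loop: mutate genes, regenerate motion, score it with a fitness function, and keep the better genes for the next generation.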
Abstract:
Provided is a background recovering apparatus for deleting an object in an arbitrary region and recovering the background occluded by the object. The apparatus includes a storing block for storing the input image sequence together with three-dimensional (3D) position, posture, and focal length information; a geometric information extracting block for extracting 3D geometric information with respect to the background; a background image generating block for generating the background image; and a background image inserting block for recovering the background by inserting the background image into an arbitrary region of a first view point.
Abstract:
A method for estimating three-dimensional positions of human joints includes the steps of: a) marker-free motion capturing a moving figure to obtain a multiview 2D image of the moving figure, and extracting a 2D feature point corresponding to a bodily end-effector; b) three-dimensionally matching the 2D feature point corresponding to the bodily end-effector, and recovering the 3D coordinates of the bodily end-effector; c) generating a 3D blob of the bodily end-effector, generating a virtual sphere whose radius is the distance from the center of the 3D blob to a joint, and projecting the virtual sphere onto the obtained multiview 2D image of the moving figure; and d) detecting a coinciding point of the surface of the projected virtual sphere and the multiview 2D image of the moving figure, and estimating the 3D position corresponding to the coinciding point as the 3D position of the joint.
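The virtual-sphere construction and projection in steps c) and d) can be sketched as below. This is a hedged simplification: it samples one great circle of the sphere rather than the full surface, and assumes an ideal pinhole camera at the origin looking along +z; the function names and camera model are not from the patent.

```python
import math

def sphere_points(center, radius, n=8):
    """Sample n points on a great circle of the virtual sphere centered
    at the 3D blob center, with radius equal to the known
    end-effector-to-joint distance."""
    cx, cy, cz = center
    return [(cx + radius * math.cos(2 * math.pi * i / n),
             cy + radius * math.sin(2 * math.pi * i / n),
             cz) for i in range(n)]

def project_point(point3d, focal=1.0):
    """Pinhole projection of a 3D point onto the image plane
    (simplified camera: at the origin, optical axis along +z)."""
    x, y, z = point3d
    return (focal * x / z, focal * y / z)
```

The joint estimate in step d) would then come from checking which projected sphere point lands on the figure's silhouette in each view and intersecting those rays back in 3D.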
Abstract:
Provided is an image-based volume data carving method for rapidly carving a specific area of three-dimensional volume data based on images. The method includes the steps of: generating a mask image to be carved from an input image; dividing a viewing transform matrix of the mask image into a shear transform matrix and a warp transform matrix, and calculating a scale factor from the shear transform matrix; modifying the mask image to be parallel to an axis of the volume data; shearing a volume slice in such a manner that the volume data can be parallel to viewing rays passing through the volume, and scaling the size of the volume slice; and carving part of the volume slice through an operation between the mask image and each volume slice.
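The final per-slice carving operation can be sketched as follows, assuming the mask and the sheared, scaled slice are already axis-aligned and the same size; this sketch interprets the "operation between the mask image and each volume slice" as zeroing masked voxels, which is an assumption.

```python
def carve_slice(volume_slice, mask):
    """Carve one volume slice: zero out each voxel wherever the
    (already aligned) carving mask is set, leaving the rest intact.

    volume_slice: 2D list of voxel values
    mask:         2D list of 0/1 flags, same dimensions
    """
    return [[0 if mask[y][x] else v
             for x, v in enumerate(row)]
            for y, row in enumerate(volume_slice)]
```

Applying this to every sheared slice removes the masked region from the whole volume in a single pass, which is what makes the image-based approach fast compared with per-voxel 3D tests.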
Abstract:
A method for reconstructing a three-dimensional structure using silhouette information on a two-dimensional plane is provided. The method includes: obtaining silhouette images; creating a cube in a three-dimensional space using the silhouette images; calculating vertex coordinates on a two-dimensional image plane by projecting the eight vertices of the three-dimensional cube onto the two-dimensional image plane of a first camera; dividing the cube into multiple inner voxels by dividing the sides formed by the eight vertices by a divider; dividing the projected region into a predetermined number of regions by dividing the sides connecting the projected coordinates by a predetermined divider; assigning indices by matching the cubes of the three-dimensional cube to square regions on the two-dimensional image plane in a one-to-one manner; storing the indices of regions where the square regions meet the first silhouette image; and reconstructing the three-dimensional structure by finding common indices through repeatedly performing the above steps using the other silhouette images.
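The "common indices" step, which intersects the per-silhouette voxel index sets, can be sketched as below; the function name is hypothetical, and the per-view index sets are assumed to have been produced by the projection-and-matching steps above.

```python
def carve_with_silhouettes(index_sets):
    """Visual-hull carving core: keep only the voxel indices that
    appear in every silhouette's index set (a voxel survives only if
    every view saw it inside the silhouette).

    index_sets: list of iterables of voxel indices, one per silhouette
    Returns the set of common indices.
    """
    result = set(index_sets[0])
    for s in index_sets[1:]:
        result &= set(s)  # intersect with each additional view
    return result
```

With more silhouette images the intersection shrinks toward the true visual hull of the object.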
Abstract:
Provided is a method for magnifying an image by interpolation. The method includes the steps of: a) setting m×m local windows and calculating a direction of each m×m local window; b) when a linear direction exists in an m×m local window, determining that an edge exists; c) when a linear direction does not exist in the m×m local window, dividing the m×m local window into m/2×m/2 sub-windows and calculating directions of the m/2×m/2 sub-windows; d) when the directions of the m/2×m/2 sub-windows point toward the center of the m×m local window, determining that a corner exists in the m×m local window; and e) selecting pixels located on a virtual line that runs along the linear direction or the sub-window directions, and calculating a new pixel value by using the selected pixels.
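The direction test of step a) can be sketched as follows. This is a simplified stand-in for the patented test: it checks only four candidate directions through the window center and declares a linear direction only when the pixel values along it are perfectly uniform; the function name, candidate set, and uniformity criterion are all assumptions.

```python
def detect_direction(window):
    """Look for a linear direction in an m x m window by comparing the
    value spread along four candidate lines through the center; return
    the direction name if one line is perfectly uniform, else None."""
    m = len(window)
    c = m // 2
    lines = {
        'horizontal': [window[c][x] for x in range(m)],
        'vertical':   [window[y][c] for y in range(m)],
        'diag_down':  [window[i][i] for i in range(m)],
        'diag_up':    [window[i][m - 1 - i] for i in range(m)],
    }
    # Spread (max - min) measures how uniform each candidate line is.
    spreads = {k: max(v) - min(v) for k, v in lines.items()}
    best = min(spreads, key=spreads.get)
    return best if spreads[best] == 0 else None
```

When this returns None, the method recurses into the m/2×m/2 sub-windows (step c) to look for a corner instead; when it returns a direction, new pixel values are interpolated along that line (step e).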