Abstract:
Provided are an apparatus and a method of effectively creating real-time movements of a three-dimensional (3D) virtual character using a small number of sensors. More specifically, the motion capture method, which maps movements of a human body onto a skeleton model to generate movements of the 3D virtual character, includes measuring the distance between a reference position and a portion of the human body at which a measurement sensor is positioned, together with the rotation angles of that portion, and estimating the relative rotation angles and position coordinates of each portion of the human body from the measured distance and rotation angles.
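As an illustration of how measured rotation angles can be mapped onto a skeleton model, the sketch below computes joint position coordinates from relative rotation angles for a planar segment chain. This is a minimal, hypothetical example (2D for brevity); the abstract does not specify the kinematic model.

```python
import math

def forward_kinematics(lengths, angles):
    """Joint position coordinates from relative rotation angles,
    for a planar chain of skeleton segments (2D kept for brevity)."""
    x = y = theta = 0.0
    coords = [(x, y)]
    for seg_len, rel_angle in zip(lengths, angles):
        theta += rel_angle              # accumulate relative rotations
        x += seg_len * math.cos(theta)  # advance along the segment
        y += seg_len * math.sin(theta)
        coords.append((x, y))
    return coords
```

For example, two unit-length segments with zero relative rotation place the end joint at (2.0, 0.0).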
Abstract:
A three-dimensional (3D) pointing sensing apparatus may be provided. The 3D pointing sensing apparatus may include an image generation unit that may photograph a first light source and a second light source in a light emitting unit, and generate an image including an image of the first light source and an image of the second light source. Also, the 3D pointing sensing apparatus may include an orientation calculation unit that may calculate an orientation of the light emitting unit using a size difference between the image of the first light source and the image of the second light source.
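The size difference carries orientation information because, under a pinhole-camera model, a more distant source projects to a smaller image. The sketch below recovers a tilt angle from the two apparent sizes; the projection model and all parameter names are assumptions, not taken from the abstract.

```python
import math

def orientation_from_sizes(size1_px, size2_px, baseline_m, focal_px, source_diameter_m):
    """Tilt of a two-source light emitting unit from the apparent sizes of
    its sources in the image (assumed pinhole model; names illustrative)."""
    # Apparent size is inversely proportional to depth under a pinhole model.
    depth1 = focal_px * source_diameter_m / size1_px
    depth2 = focal_px * source_diameter_m / size2_px
    # The depth difference across the known source spacing gives the tilt.
    ratio = (depth2 - depth1) / baseline_m
    return math.asin(max(-1.0, min(1.0, ratio)))
```

Equal apparent sizes mean both sources are at the same depth, i.e. zero tilt; a smaller second image means the second source is farther away, giving a positive tilt.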
Abstract:
A position estimation apparatus may measure 3-axis accelerations using at least two acceleration sensors disposed at different distances from the apparatus's center of rotation, measure 3-axis angular velocity using a gyro sensor, and detect an azimuth angle using a geomagnetic sensor. Using the 3-axis accelerations measured by the acceleration sensors and the 3-axis angular velocity measured by the gyro sensor, the position estimation apparatus calculates the gravitational acceleration with the rotational motion component removed.
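One way to see why two sensors at different radii suffice: the rotation-induced acceleration scales with distance from the center of rotation, while gravity is shared by both sensors, so extrapolating the two readings to r = 0 isolates gravity. This is a sketch under that linear-in-radius assumption, not the patent's exact computation.

```python
import numpy as np

def gravity_from_two_accels(a1, a2, r1, r2):
    """Extract gravity by extrapolating two 3-axis accelerometer readings
    to the center of rotation (r = 0), assuming the rotation-induced
    component grows linearly with distance from the center."""
    a1, a2 = np.asarray(a1, float), np.asarray(a2, float)
    # a(r) = g + r * rot  =>  g = (r2*a1 - r1*a2) / (r2 - r1)
    return (r2 * a1 - r1 * a2) / (r2 - r1)
```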
Abstract:
A system and method for estimating a position and a direction using infrared light are provided. The system may measure, through each light receiver, the intensity of the light emitted by each light irradiator, and may estimate the position and direction of a remote apparatus based on the measured intensity, a light receiving directivity, and a light emitting directivity.
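A basic building block of such intensity-based estimation is recovering range from a received intensity. The sketch below assumes an inverse-square attenuation law with scalar directivity gains for the emitter and receiver; the abstract does not give the actual attenuation model, so this is illustrative only.

```python
import math

def distance_from_intensity(i_received, i_emit, gain_emit, gain_recv):
    """Range from a received IR intensity under an assumed inverse-square
    law with emitter/receiver directivity gains (illustrative model)."""
    # i_received = i_emit * gain_emit * gain_recv / d**2  =>  solve for d
    return math.sqrt(i_emit * gain_emit * gain_recv / i_received)
```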
Abstract:
Disclosed are a moving object, and a location measuring device and method, that may transmit an ultrasound signal to the moving object through a plurality of ultrasound transmitting units, and may estimate the location of the moving object at the current time based on the distances between the moving object and the ultrasound transmitting units (measured from the transmitted ultrasound), inertia information, and the location of the moving object at a time prior to the current time.
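The ultrasound-range part of such an estimate is a trilateration problem: given distances to transmitters at known positions, solve for the object's location. A common least-squares sketch (an assumption here, not the patent's stated algorithm) linearizes by subtracting the first squared-range equation from the rest:

```python
import numpy as np

def trilaterate(tx_positions, distances):
    """Least-squares location from ranges to known transmitter positions,
    linearized by subtracting the first squared-range equation."""
    P = np.asarray(tx_positions, float)
    d = np.asarray(distances, float)
    # |x - Pi|^2 = di^2 minus the P0 equation gives: 2(Pi - P0) . x = b_i
    A = 2.0 * (P[1:] - P[0])
    b = d[0] ** 2 - d[1:] ** 2 + (P[1:] ** 2).sum(axis=1) - (P[0] ** 2).sum()
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With four or more non-coplanar transmitters this recovers a unique 3D location from exact ranges.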
Abstract:
An apparatus and method of estimating a three-dimensional (3D) position and orientation based on a sensor fusion process are provided. The method of estimating the 3D position and orientation may include determining a position of a marker in a two-dimensional (2D) image, determining a depth of a position in a depth image corresponding to the position of the marker in the 2D image to be a depth of the marker, estimating a 3D position of the marker calculated based on the depth of the marker as a marker-based position of a remote apparatus, estimating an inertia-based position and an inertia-based orientation by receiving inertial information associated with the remote apparatus, estimating a fused position based on a weighted sum of the marker-based position and the inertia-based position, and outputting the fused position and the inertia-based orientation.
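The fusion step described above reduces to a per-axis weighted sum of the two position estimates. A minimal sketch follows; the weight value 0.7 is an illustrative assumption, as the abstract leaves the weighting open.

```python
def fuse_position(marker_pos, inertia_pos, w_marker=0.7):
    """Weighted sum of the marker-based and inertia-based 3D positions.
    The weight 0.7 is illustrative; the abstract does not fix it."""
    return [w_marker * m + (1.0 - w_marker) * p
            for m, p in zip(marker_pos, inertia_pos)]
```

In practice the weight would reflect relative confidence in the marker track versus the drifting inertial estimate.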
Abstract:
An apparatus and method for estimating a three-dimensional (3D) position and orientation based on a sensor fusion process are provided. The method of estimating the 3D position and orientation may include estimating a strength-based position and a strength-based orientation of a remote apparatus when a plurality of strength information is received, based on an attenuation characteristic of a strength that varies with distance and orientation, estimating an inertia-based position and an inertia-based orientation of the remote apparatus by receiving a plurality of inertial information, estimating a fused position based on a weighted sum of the strength-based position and the inertia-based position, and estimating a fused orientation based on a weighted sum of the strength-based orientation and the inertia-based orientation. The strength-based position and the strength-based orientation may be estimated based on a plurality of adjusted strength information from which noise has been removed using a plurality of previous strength information.
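One plausible way to produce "adjusted strength information" from previous readings is exponential smoothing; the abstract does not specify the filter, so the sketch below is an assumption.

```python
def smooth_strengths(strengths, prev_strengths, alpha=0.3):
    """Adjust current strength readings using previous readings via
    exponential smoothing (one plausible noise-removal scheme; the
    abstract does not specify the actual filter)."""
    return [alpha * s + (1.0 - alpha) * p
            for s, p in zip(strengths, prev_strengths)]
```

A smaller alpha trusts the history more, suppressing measurement noise at the cost of slower response to real changes in strength.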
Abstract:
A high precision signal sensing system and method using infrared light are provided. The high precision signal sensing system may receive, from a light emitting device, a plurality of lights including a first light and a second light, may measure the received intensities of the first light and the second light, and may measure the light emitting intensity of the light emitting device based on the difference between the measured received intensities.