Abstract:
An apparatus and method for processing a virtual world extract haptic information regarding a virtual object in the virtual world, the haptic information corresponding to sensed information, and transmit the haptic information to a haptic feedback device. Accordingly, interaction between the real world and the virtual world is achieved. The processing speed of the haptic information with respect to the virtual object may be increased by varying data structures according to the type of the virtual object.
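By way of illustration only, varying the haptic data structure by object type might look like the following minimal sketch; the class names, fields, and device API are hypothetical and not taken from the disclosure.

    from dataclasses import dataclass

    # Hypothetical haptic records: a rigid object needs only a single
    # stiffness value, while a deformable object carries per-vertex data,
    # so the lighter structure keeps rigid-object processing fast.
    @dataclass
    class RigidHaptics:
        stiffness: float

    @dataclass
    class DeformableHaptics:
        stiffness_map: dict  # vertex id -> stiffness

    def extract_haptics(obj_type, sensed):
        # Choose the data structure according to the type of the object.
        if obj_type == "rigid":
            return RigidHaptics(stiffness=sensed["stiffness"])
        return DeformableHaptics(stiffness_map=sensed["per_vertex"])

    def send_to_device(device, haptics):
        device.render(haptics)  # hypothetical haptic feedback device API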
Abstract:
Disclosed are a virtual world processing device and method. By way of example, data collected from the real world is converted to binary-form data and transmitted, is converted to XML data, or is converted to XML data that is further converted to binary-form data and then transmitted. This allows the data transmission rate to be increased and a low bandwidth to be used. In addition, the complexity of a data-receiving adaptation RV engine can be reduced, since the engine no longer needs to include an XML parser.
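The size trade-off can be made concrete with a toy record; the field names and the fixed binary layout below are illustrative assumptions, not the disclosed format.

    import struct
    import xml.etree.ElementTree as ET

    def to_xml(sample):
        # Text encoding: self-describing, but verbose on the wire.
        root = ET.Element("sample")
        ET.SubElement(root, "x").text = str(sample["x"])
        ET.SubElement(root, "y").text = str(sample["y"])
        return ET.tostring(root)

    def to_binary(sample):
        # Fixed binary layout: a smaller payload, and the receiver needs
        # no XML parser, only knowledge of the agreed layout.
        return struct.pack("<ff", sample["x"], sample["y"])

    sample = {"x": 1.5, "y": -0.25}
    print(len(to_xml(sample)), "bytes as XML")
    print(len(to_binary(sample)), "bytes as binary")  # 8 bytes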
Abstract:
A method of learning a parameter to estimate a posture of an articulated object, and a method of estimating the posture of the articulated object are provided. A parameter used to estimate a posture of an articulated object may be iteratively learned based on a depth feature corresponding to an iteration count, and the posture of the articulated object may be estimated based on the learned parameter.
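One common realization of this idea is cascaded regression: each iteration extracts a depth feature at the current pose estimate and applies an iteration-specific learned parameter to refine it. The sketch below assumes linear per-iteration regressors and a placeholder depth feature; none of it is taken from the disclosure.

    import numpy as np

    def depth_feature(depth_image, pose, t):
        # Hypothetical feature: depth values sampled at offsets around the
        # current joint estimates; the offsets depend on the iteration count t.
        rng = np.random.default_rng(t)
        offsets = rng.integers(-5, 6, size=(pose.shape[0], 2))
        pts = np.clip(pose.astype(int) + offsets, 0,
                      np.array(depth_image.shape) - 1)
        return depth_image[pts[:, 0], pts[:, 1]]

    def estimate_pose(depth_image, init_pose, stages):
        # stages: the iteratively learned parameters, one matrix per iteration.
        pose = init_pose.copy()
        for t, W in enumerate(stages):
            f = depth_feature(depth_image, pose, t)
            pose += (W @ f).reshape(pose.shape)  # refine the estimate
        return pose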
Abstract:
A three-dimensional (3D) display device for displaying a 3D image using at least one of a gaze direction of a user and a gravity direction includes a gaze direction measuring unit to measure the gaze direction, a data obtaining unit to obtain 3D image data for the 3D image, a viewpoint information obtaining unit to obtain information relating to a viewpoint of the 3D image, a data transform unit to transform the 3D image data, based on the gaze direction and the information relating to the viewpoint of the 3D image, and a display unit to display the 3D image, based on the transformed 3D image data.
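As a minimal sketch of the data transform step, suppose the 3D image data is a point cloud and both the gaze direction and the viewpoint are expressed as yaw angles about the gravity (vertical) axis; all of these assumptions are illustrative.

    import numpy as np

    def yaw_rotation(angle_rad):
        # Rotation about the vertical (gravity) axis.
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        return np.array([[c, 0.0, s],
                         [0.0, 1.0, 0.0],
                         [-s, 0.0, c]])

    def transform_for_gaze(points, gaze_yaw, viewpoint_yaw):
        # Re-orient the data from its captured viewpoint toward the
        # user's measured gaze direction before display.
        R = yaw_rotation(gaze_yaw - viewpoint_yaw)
        return points @ R.T

    cloud = np.array([[0.0, 0.0, 1.0], [0.5, 0.2, 0.8]])
    print(transform_for_gaze(cloud, np.pi / 6, 0.0))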
Abstract:
A user recognition method includes extracting a user feature of a current user from input data, estimating an identifier of the current user based on the extracted user feature, and, in response to an absence of an identifier corresponding to the current user, generating an identifier for the current user and controlling updating of user data based on the generated identifier and the extracted user feature.
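The recognize-or-enroll flow might look like the following sketch; the feature comparison, similarity threshold, and in-memory store are placeholder assumptions.

    import numpy as np

    users = {}        # identifier -> stored feature (placeholder store)
    THRESHOLD = 0.8   # illustrative similarity cutoff

    def recognize_or_enroll(feature):
        # Estimate the identifier by the most similar stored feature.
        best_id, best_sim = None, -1.0
        for uid, ref in users.items():
            sim = feature @ ref / (np.linalg.norm(feature) * np.linalg.norm(ref))
            if sim > best_sim:
                best_id, best_sim = uid, sim
        if best_id is not None and best_sim >= THRESHOLD:
            # Known user: update the stored user data with the new observation.
            users[best_id] = 0.9 * users[best_id] + 0.1 * feature
            return best_id
        # No corresponding identifier: generate one and enroll the user.
        new_id = "user_{}".format(len(users))
        users[new_id] = feature
        return new_id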
Abstract:
A multi-touch sensing apparatus using an array-type rear-view camera is provided. The multi-touch sensing apparatus may include a display panel to display an image, a sensing light source to emit light to sense a touch image which is generated by an object and displayed on a back side of the display panel, and a camera to divide and sense the touch image. The camera may be arranged at an edge of the lower side of the multi-touch sensing apparatus, or a mirror to reflect the touch image may be included in the multi-touch sensing apparatus.
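Because each camera of the array senses only a portion of the touch image, the sensed tiles have to be reassembled into one image before touch detection. A minimal sketch, assuming a regular, non-overlapping grid of equally sized tiles:

    import numpy as np

    def stitch_tiles(tiles, grid_rows, grid_cols):
        # tiles: equally sized sub-images in row-major order, one per
        # camera of the array (assumed non-overlapping and aligned).
        rows = [np.hstack(tiles[r * grid_cols:(r + 1) * grid_cols])
                for r in range(grid_rows)]
        return np.vstack(rows)

    # Four cameras in a 2x2 array, each sensing a 4x4 patch.
    tiles = [np.full((4, 4), i) for i in range(4)]
    print(stitch_tiles(tiles, 2, 2).shape)  # (8, 8)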
Abstract:
Example embodiments disclose a method of generating a feature vector, a method of generating a histogram, a learning unit classifier, a recognition apparatus, and a detection apparatus, in which a feature point is detected from an input image based on a dominant direction analysis of a gradient distribution, and a feature vector corresponding to the detected feature point is generated.
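The dominant-direction idea can be sketched as a magnitude-weighted orientation histogram of the gradient distribution in a patch, whose peak gives the dominant direction; the bin count and patch handling here are illustrative.

    import numpy as np

    def dominant_gradient_direction(patch, n_bins=36):
        # Gradient distribution of the patch.
        gy, gx = np.gradient(patch.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
        # Magnitude-weighted orientation histogram.
        hist, edges = np.histogram(ang, bins=n_bins,
                                   range=(0, 2 * np.pi), weights=mag)
        peak = np.argmax(hist)
        return 0.5 * (edges[peak] + edges[peak + 1]), hist

    patch = np.tile(np.arange(8.0), (8, 1))  # horizontal intensity ramp
    direction, hist = dominant_gradient_direction(patch)
    print(direction)  # near 0: the gradient points along +x

A feature vector could then be formed, for example, by circularly shifting the histogram so that the dominant bin comes first, making the descriptor invariant to in-plane rotation.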
Abstract:
An apparatus for detecting an interfacing region in a depth image detects the interfacing region based on a depth of a first region and a depth of a second region, the second region being an external region of the first region in the depth image.
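A minimal sketch of the depth comparison, assuming the interfacing region is found where a first (inner) region lies clearly in front of its second (surrounding) region; the window size and threshold are illustrative.

    import numpy as np

    def detect_interfacing(depth, margin=2, delta=30.0):
        h, w = depth.shape
        mask = np.zeros((h, w), dtype=bool)
        for y in range(margin, h - margin):
            for x in range(margin, w - margin):
                inner = depth[y, x]                        # first region
                window = depth[y - margin:y + margin + 1,
                               x - margin:x + margin + 1]  # covers second region
                outer = (window.sum() - inner) / (window.size - 1)
                # Interfacing where the inner depth is clearly in front.
                mask[y, x] = outer - inner > delta
        return mask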
Abstract:
An apparatus and method for analyzing body part association are provided. The apparatus and method may recognize at least one body part from a user image extracted from an observed image, select at least one candidate body part based on association of the at least one body part, and output a user pose skeleton related to the user image based on the selected at least one candidate body part.
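At a high level the flow is: recognize parts, score candidate detections by how plausibly they associate with neighboring parts, and assemble a skeleton. A toy sketch with hypothetical part names and a pairwise-distance plausibility score:

    import numpy as np

    # Hypothetical expected limb lengths between associated parts (pixels).
    EXPECTED = {("shoulder", "elbow"): 60.0, ("elbow", "wrist"): 55.0}

    def association_score(pa, pb, pair):
        # Candidate pairs whose distance matches the expected length score higher.
        d = np.linalg.norm(np.subtract(pa, pb))
        return np.exp(-abs(d - EXPECTED[pair]) / 20.0)

    def select_skeleton(candidates):
        # candidates: part name -> list of (x, y) detections in the user image.
        skeleton = {}
        for pair in EXPECTED:
            a, b = pair
            pa, pb = max(((p, q) for p in candidates[a] for q in candidates[b]),
                         key=lambda pq: association_score(pq[0], pq[1], pair))
            skeleton[a], skeleton[b] = pa, pb
        return skeleton

    candidates = {"shoulder": [(100, 80)],
                  "elbow": [(150, 120), (90, 200)],
                  "wrist": [(195, 150)]}
    print(select_skeleton(candidates))  # picks the anatomically consistent elbow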
Abstract:
Provided are a liveness verification method and device. A liveness verification device acquires a first image and a second image, selects one or more liveness models based on respective analyses of the first image and the second image, including analyses based on an object part being detected in the first image and/or the second image, and verifies, using the selected one or more liveness models, a liveness of the object based on the first image and/or the second image. The first image may be a color image and the second image may be an infrared image.
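The model-selection step can be sketched as a simple dispatch on which image(s) show the object part; the detectors, models, score fusion, and threshold below are all placeholder assumptions.

    def verify_liveness(color_img, ir_img, detect_part, models):
        # Select liveness models according to which images show the object part.
        selected = []
        if detect_part(color_img):
            selected.append(models["color"])
        if detect_part(ir_img):
            selected.append(models["ir"])
        if not selected:
            return False  # no usable observation of the object part
        # Verify liveness with the selected models (simple score averaging
        # is an assumption, not the disclosed fusion rule).
        scores = [m(color_img, ir_img) for m in selected]
        return sum(scores) / len(scores) >= 0.5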