Abstract:
A method and apparatus for a motion capture interface using multiple fingers are disclosed. The method includes measuring a position of an end of a middle finger of an actual hand in a state in which the actual hand is spread, deriving a starting reference position of the middle finger of the actual hand, and calculating a length of the middle finger of the actual hand. The method further includes recognizing a relationship between starting reference positions of a thumb, an index finger, a middle finger, and a wrist by using a virtual hand reference model that models a virtual hand to be controlled.
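The measurement steps above reduce to simple vector arithmetic. The sketch below (a minimal illustration, not the patented method itself; the coordinate values and the 0.08 m model finger length are invented for the example) computes the middle-finger length from the fingertip and its starting reference position, and a scale factor mapping the virtual hand reference model onto the actual hand.

```python
import math

def finger_length(tip, base):
    """Euclidean distance between the measured fingertip position
    and the finger's starting reference position (3-D points)."""
    return math.dist(tip, base)

def hand_scale(actual_middle_len, model_middle_len):
    """Scale factor that fits the virtual hand reference model
    to the user's actual hand."""
    return actual_middle_len / model_middle_len

# Middle finger measured while the hand is spread (illustrative values).
tip = (0.02, 0.095, 0.0)    # metres, tracker frame
base = (0.02, 0.0, 0.0)     # derived starting reference position
length = finger_length(tip, base)   # 0.095 m
scale = hand_scale(length, 0.08)    # model middle finger is 0.08 m
```

The same scale would then be applied to the thumb, index finger, and wrist reference positions of the model.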
Abstract:
A mixed reality display device according to one embodiment of the present invention comprises: a virtual environment rendering unit for generating a virtual object by using information on a scene in a virtual reality, and then generating a color map and a depth map for the virtual object; a depth rendering unit for generating a depth map for a real object by using information on a real environment; an occlusion processing unit for performing occlusion processing by using the color map and the depth map for the virtual object received from the virtual environment rendering unit, the depth map for the real object received from the depth rendering unit, and a color map for the real object received from a see-through camera; and a display unit for outputting a color image by using the color map for the virtual object and the color map for the real object, which are received from the occlusion processing unit.
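The core of such occlusion processing is a per-pixel depth comparison between the two depth maps. A minimal sketch of that step, assuming aligned H×W maps and RGB color arrays (the array shapes and values are illustrative, not taken from the patent):

```python
import numpy as np

def composite(virt_color, virt_depth, real_color, real_depth):
    """Per-pixel occlusion test: wherever the real surface is closer
    to the camera than the virtual one, the real pixel wins."""
    real_in_front = real_depth < virt_depth     # H x W boolean mask
    mask = real_in_front[..., None]             # broadcast over RGB channels
    return np.where(mask, real_color, virt_color)

# 1x2 toy frame: in the left pixel the real object is closer,
# in the right pixel the virtual object is closer.
virt_color = np.array([[[255, 0, 0], [255, 0, 0]]], dtype=np.uint8)
real_color = np.array([[[0, 255, 0], [0, 255, 0]]], dtype=np.uint8)
virt_depth = np.array([[1.0, 0.5]])
real_depth = np.array([[0.5, 1.0]])
out = composite(virt_color, virt_depth, real_color, real_depth)
# out[0, 0] is the real (green) pixel, out[0, 1] the virtual (red) one
```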
Abstract:
Provided is an in vivo bioimaging method including irradiating near-infrared (NIR) light onto a living body, converting the NIR light that has passed through the living body into visible light using upconversion nanoparticles (UCNPs), and generating a bioimage of the living body by receiving the visible light using a complementary metal-oxide-semiconductor (CMOS) image sensor.
Abstract:
Disclosed is an apparatus for outputting a virtual keyboard, the apparatus including: a virtual keyboard image output unit determining coordinates of a virtual keyboard image by using hand information of a user and outputting the virtual keyboard image; a contact recognition unit determining a contact state by using collision information between a virtual physical collider associated with an end point of a user's finger and a virtual physical collider associated with each virtual key of the virtual keyboard image; a keyboard input unit providing multiple input values for a single virtual key; and a feedback output unit outputting respective feedback for the multiple input values. Accordingly, input convenience and efficiency may be provided by outputting the virtual keyboard in a three-dimensional virtual space and reproducing an input method similar to that of a real-world keyboard.
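The contact test between a fingertip collider and a key collider is commonly a sphere-versus-box intersection. A minimal sketch under that assumption (the collider shapes and the key dimensions are illustrative; the patent does not specify them):

```python
def sphere_aabb_collides(center, radius, box_min, box_max):
    """Collision between the fingertip's spherical collider and a
    virtual key's axis-aligned box collider."""
    # Closest point on the box to the sphere centre, per axis.
    closest = [max(lo, min(c, hi))
               for c, lo, hi in zip(center, box_min, box_max)]
    dist_sq = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist_sq <= radius ** 2

# An 18 mm x 18 mm x 4 mm key (illustrative geometry).
key_min, key_max = (0.0, 0.0, 0.0), (0.018, 0.018, 0.004)
touching = sphere_aabb_collides((0.009, 0.009, 0.008), 0.005,
                                key_min, key_max)   # True: finger on key
hovering = sphere_aabb_collides((0.009, 0.009, 0.02), 0.005,
                                key_min, key_max)   # False: finger above key
```

The contact recognition unit would then map sustained or repeated contact on the same key to the multiple input values described above.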
Abstract:
A method for providing telepresence by employing avatars is provided. The method includes steps of: (a) a corresponding location searching part determining a location in a first space where an avatar Y′ corresponding to a human Y in a second space will be placed, if a change of a location of the human Y in the second space is detected from an initial state, by referring to (i) information on the first space and the second space and (ii) information on locations of the humans X and Y, and the avatar X′ in the first and the second spaces; and (b) an avatar motion creating part creating a motion of the avatar Y′ by referring to information on the determined location where the avatar Y′ will be placed.
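One simple policy for the corresponding location searching part is to preserve the human's relative position within the room. The sketch below makes that assumption (rectangular spaces, proportional mapping; the room dimensions are invented for the example) and is not the patented search procedure itself:

```python
def corresponding_location(pos, src_size, dst_size):
    """Map a position in the second space to the first space by
    preserving its normalised (relative) coordinates."""
    return tuple(p / s * d for p, s, d in zip(pos, src_size, dst_size))

# Human Y stands at the centre of a 4 m x 6 m room; avatar Y'
# is placed at the centre of the 3 m x 3 m first space.
spot = corresponding_location((2.0, 3.0), (4.0, 6.0), (3.0, 3.0))
# spot == (1.5, 1.5)
```

The avatar motion creating part would then animate avatar Y′ toward the computed spot.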
Abstract:
An apparatus for generating haptic feedback, includes: multiple haptic units placed on a first portion of a body; and a control unit placed on a second portion, near the first portion, of the body, wherein the control unit includes: a first module for acquiring information on relative position (i) among the respective multiple haptic units and (ii) between the respective haptic units and the control unit, a second module for acquiring information on absolute position of the control unit by measuring a position of the control unit in reference to an external reference point, and a haptic command module for creating a command signal by referring to at least one piece of the information on relative position acquired by the first module and the information on absolute position acquired by the second module and delivering the created command signal to a corresponding haptic unit among all the multiple haptic units.
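Delivering the command signal to "a corresponding haptic unit" implies selecting a unit from the relative-position information. A minimal sketch of one plausible selection rule (nearest unit to the desired stimulus point; the unit IDs and coordinates are illustrative, not from the patent):

```python
import math

def nearest_unit(target_rel, unit_positions):
    """Pick the haptic unit whose position relative to the control
    unit is closest to the desired stimulus location."""
    return min(unit_positions,
               key=lambda uid: math.dist(unit_positions[uid], target_rel))

# Three haptic units on the first body portion (positions in metres,
# relative to the control unit).
units = {"u1": (0.0, 0.0), "u2": (0.05, 0.0), "u3": (0.0, 0.05)}
chosen = nearest_unit((0.04, 0.01), units)   # "u2" is closest
```

The absolute position of the control unit (from the external reference point) would let the same rule work with stimulus targets given in world coordinates.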
Abstract:
Provided is a hand-held user interface device for providing force feedback to a user according to an interaction with a virtual object or with a user at a remote place. By sensing the three-dimensional (3D) position and direction of the device and diversifying the force feedback provided to the user according to an action of the user holding the device, the device remarkably increases the sense of reality in the interaction with the virtual object without limiting the user's actions.
Abstract:
The present invention relates to an apparatus for creating a tactile sensation through non-invasive brain stimulation by using ultrasonic waves. The apparatus includes: an ultrasonic transducer module for inputting the ultrasonic waves to stimulate a specific part of the brain of a specified user non-invasively through at least one ultrasonic transducer unit; a compensating module for acquiring information on a range of tactile perception areas in the brain of the specified user and compensating properties of ultrasonic waves to be inputted to the specified user through the ultrasonic transducer unit by referring to the acquired information thereon; and an ultrasonic waves generating module for generating ultrasonic waves to be inputted to the specified user through the ultrasonic transducer unit by referring to a compensating value decided by the compensating module.
Abstract:
An apparatus interacting with an external device by using a pedal module is provided. The apparatus includes: a pedal module; a parallel position-measuring sensor for sensing a degree of a parallel motion; a rotary position-measuring sensor for sensing a degree of a rotary motion; and a control part for ordering the external device to be driven by referring to at least one of the degree of the parallel motion sensed by the parallel position-measuring sensor and the degree of the rotary motion sensed by the rotary position-measuring sensor, or for receiving a control signal from the external device and driving a motor group including at least one motor to apply force feedback to the pedal module by referring to the control signal.
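The control part's forward path reduces to mapping the two sensed degrees of motion to drive commands, where either axis alone is sufficient. A minimal sketch of that mapping (the command names and dead-zone threshold are invented for illustration; the patent does not specify them):

```python
def pedal_command(parallel_deg, rotary_deg, threshold=0.05):
    """Map sensed pedal motion to drive commands for the external
    device; either axis alone can trigger a command, and motion
    below the dead-zone threshold is ignored."""
    cmd = {}
    if abs(parallel_deg) > threshold:
        cmd["translate"] = parallel_deg
    if abs(rotary_deg) > threshold:
        cmd["rotate"] = rotary_deg
    return cmd

push_only = pedal_command(0.2, 0.0)   # {"translate": 0.2}
twist_only = pedal_command(0.0, 0.3)  # {"rotate": 0.3}
```

The reverse path (control signal in, motor torque out) would invert this mapping to drive the motor group for force feedback.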
Abstract:
A method for displaying a shadow of a 3D virtual object includes steps of: (a) acquiring information on a viewpoint of a user looking at a 3D virtual object displayed in a specific location in 3D space by a wall display device; (b) determining a location and a shape of a shadow of the 3D virtual object to be displayed by referring to the information on the viewpoint of the user and information on a shape of the 3D virtual object; and (c) allowing the shadow of the 3D virtual object to be displayed by at least one of the wall display device and a floor display device by referring to the determined location and the determined shape of the shadow of the 3D virtual object. Accordingly, the user is allowed to feel an accurate sense of depth or distance regarding the 3D virtual object.
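Step (b) amounts to projecting the object's geometry onto the floor plane from the user's viewpoint. A minimal sketch of that ray-plane intersection for one vertex (a simplification assuming the floor is the plane y = 0; the eye height and vertex coordinates are illustrative):

```python
def project_to_floor(eye, point, floor_y=0.0):
    """Cast a ray from the user's viewpoint through a vertex of the
    virtual object and intersect it with the floor plane y == floor_y."""
    ex, ey, ez = eye
    px, py, pz = point
    if py >= ey:
        return None  # the ray never descends to the floor
    t = (floor_y - ey) / (py - ey)
    return (ex + t * (px - ex), floor_y, ez + t * (pz - ez))

# Eye at 1.6 m, object vertex at 0.8 m: the shadow point lands on the
# floor beyond the vertex, as seen from the user.
shadow = project_to_floor((0.0, 1.6, 0.0), (0.5, 0.8, 0.0))
# shadow == (1.0, 0.0, 0.0)
```

Projecting every silhouette vertex this way yields the shadow shape; vertices whose projections fall past the wall line would be handed to the wall display device instead of the floor one.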