Abstract:
A head-mounted device (HMD) for enabling 3D drawing interaction in a mixed-reality space is provided. The HMD includes a frame section; a rendering unit that provides a specified image; a camera unit, attached to the frame section, that captures images for rendering; and a control unit configured to, when the camera unit captures an image of a specified marker, perform a calibration process based on position information of the marker image displayed on the HMD screen and, when an input device is moved to interact with a virtual whiteboard, obtain position information of the input device's image on a virtual camera screen based on position information of the whiteboard.
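The marker-based calibration step can be illustrated with a minimal sketch. The abstract does not specify the calibration math, so the following assumes the simplest case: the offset between where the marker image should appear on the HMD screen and where it is actually detected is used to correct subsequent screen positions. All names and coordinates here are hypothetical.

```python
import numpy as np

def calibrate_offset(expected_screen_pos, detected_marker_pos):
    """Offset between where the marker should appear on screen and
    where the camera actually detected it (hypothetical 2D model)."""
    return np.asarray(expected_screen_pos, float) - np.asarray(detected_marker_pos, float)

def correct(screen_point, offset):
    """Apply the calibration offset to any later screen position."""
    return np.asarray(screen_point, float) + offset

# Marker expected at screen center (640, 360) but detected at (652, 371):
offset = calibrate_offset([640, 360], [652, 371])
print(correct([100, 100], offset))   # -> [88. 89.]
```

A real HMD calibration would likely estimate a full homography or camera pose rather than a pure translation; this sketch only shows the role the marker's detected position plays.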
Abstract:
A method for performing occlusion queries is disclosed. The method includes the steps of: (a) a graphics processing unit (GPU) using a first depth buffer of a first frame to predict a second depth buffer of a second frame; and (b) the GPU performing occlusion queries for the second frame using the predicted second depth buffer, wherein the first frame precedes the second frame. In accordance with the present invention, no configuration for classifying objects into occluders and occludees is required, and the occlusion query results for the second frame are acquired in advance, at the end of the first frame or the beginning of the second frame.
Abstract:
A method for displaying a shadow of a 3D virtual object includes the steps of: (a) acquiring information on the viewpoint of a user looking at a 3D virtual object displayed at a specific location in 3D space by a wall display device; (b) determining a location and a shape of the shadow of the 3D virtual object to be displayed by referring to the information on the viewpoint of the user and information on the shape of the 3D virtual object; and (c) allowing the shadow of the 3D virtual object to be displayed by at least one of the wall display device and a floor display device by referring to the determined location and shape of the shadow. Accordingly, the user perceives an accurate sense of depth and distance with regard to the 3D virtual object.
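The geometric core of step (b) can be sketched as a viewpoint-dependent projection of the object's vertices onto a display plane (floor or wall). The abstract does not give the projection rule, so this assumes the shadow position is found by casting rays from the eye through each vertex onto the plane; the function name and the example coordinates are hypothetical.

```python
import numpy as np

def shadow_on_plane(viewpoint, vertices, plane_point, plane_normal):
    """Project each object vertex, along the ray from the user's eye
    through that vertex, onto a display plane in point-normal form."""
    v = np.asarray(viewpoint, float)
    pts = np.asarray(vertices, float)
    d = pts - v                                   # rays from the eye through each vertex
    n = np.asarray(plane_normal, float)
    t = ((np.asarray(plane_point, float) - v) @ n) / (d @ n)
    return v + t[:, None] * d

# Eye at (0, 2, 4), object vertex at (0, 1, 2): the shadow lands on the
# floor display plane y = 0 at the origin.
print(shadow_on_plane([0, 2, 4], [[0, 1, 2]], [0, 0, 0], [0, 1, 0]))  # [[0. 0. 0.]]
```

Deciding between the wall and floor device in step (c) could then amount to checking which plane each projected ray actually intersects first, though the abstract leaves that choice open.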
Abstract:
A method enables a first device and a second device to support interactions with respect to a 3D object. The method includes the steps of: (a) allowing the first device to acquire information on a physical 3D object and information on images of its user; (b) allowing the second device to receive the information on the physical 3D object and the information on images of the user of the first device, then display a virtual 3D object corresponding to the physical 3D object and a 3D avatar of the user of the first device; and (c) allowing the first device to transmit information on the first user's manipulation of the physical 3D object, together with information on images of that user while manipulating the object, and then allowing the second device to display the 3D avatar of the user of the first device accordingly.
Abstract:
The present invention provides a method for planning a path for an autonomous walking humanoid robot that takes autonomous walking steps using environment map information, the method comprising: an initialization step of initializing path input information of the autonomous walking humanoid robot using origin information, destination information, and the environment map information; an input information conversion step of forming a virtual robot, whose information is obtained by considering the radius and the radius of gyration of the autonomous walking humanoid robot, based on the initialized path input information; a path generation step of generating a path for the virtual robot using the virtual robot information, the origin information (S), the destination information (G), and the environment map information; and an output information conversion step of converting the virtual robot path generated in the path generation step into a path for the autonomous walking humanoid robot.
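The virtual-robot conversion step resembles the standard configuration-space trick: grow the map's obstacles by the robot's radius so the robot can be planned for as a single point. A minimal sketch, assuming a grid map and a plain breadth-first search as a stand-in for whatever planner the path generation step actually uses (all names here are hypothetical):

```python
import numpy as np
from collections import deque

def inflate(grid, r):
    """Grow each obstacle cell by the virtual robot radius r (in cells),
    so the robot itself can be treated as a point."""
    out = grid.copy()
    for y, x in zip(*np.nonzero(grid)):
        out[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1] = 1
    return out

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on the inflated grid, from origin S to
    destination G; returns a list of cells or None if unreachable."""
    H, W = grid.shape
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        y, x = cur
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < H and 0 <= nx < W and not grid[ny, nx] and (ny, nx) not in prev:
                prev[(ny, nx)] = cur
                q.append((ny, nx))
    return None

world = np.zeros((5, 5), dtype=int)
world[2, 2] = 1                               # one obstacle cell in the map
path = bfs_path(inflate(world, 1), (0, 0), (4, 4))
print(len(path))   # 9 cells: the point robot detours around the inflated block
```

The output information conversion step would then map each cell of this virtual-robot path back to footstep targets for the physical humanoid, which this sketch does not attempt.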
Abstract:
Provided are a user interface device, and a control method thereof, for supporting easy and accurate selection of overlapped objects. The user interface device provides a user interface for a three-dimensional (3D) virtual space in which a plurality of virtual objects is created, and includes a gaze sensor unit to sense the user's gaze; an interaction sensor unit to sense the user's body motion for interaction with virtual objects in the 3D virtual space; a display unit to display the 3D virtual space; and a control unit to, when the user's gaze overlaps at least two virtual objects, generate projection objects corresponding to the overlapped virtual objects. When an interaction between a projection object and the user is sensed, the control unit processes it as an interaction between the user and the virtual object corresponding to that projection object.
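The control unit's behavior can be sketched in two parts: find every virtual object the gaze ray overlaps, then lay out one proxy ("projection object") per hit so they no longer occlude each other. The abstract does not define the object shapes or the proxy layout, so this assumes spherical objects and a simple side-by-side arrangement; all names are hypothetical.

```python
import numpy as np

def gaze_hits(origin, direction, centers, radius):
    """Indices of spheres (common radius) intersected by the gaze ray."""
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    hits = []
    for i, c in enumerate(np.asarray(centers, float)):
        oc = c - o
        t = oc @ d                                # closest approach along the ray
        if t > 0 and np.linalg.norm(oc - t * d) <= radius:
            hits.append(i)
    return hits

def projection_objects(hit_indices, anchor, spacing=0.3):
    """Place one proxy per overlapped object, spread out near the user so
    each can be selected unambiguously."""
    return {i: (anchor[0] + k * spacing, anchor[1], anchor[2])
            for k, i in enumerate(hit_indices)}

centers = [(0, 0, 5), (0, 0, 8), (3, 0, 5)]     # first two overlap along the gaze
hits = gaze_hits([0, 0, 0], [0, 0, 1], centers, 0.5)
print(hits)                                      # [0, 1]
proxies = projection_objects(hits, (0.0, 1.0, 1.0))
```

Selecting a proxy would then be routed back to `centers[i]`, matching the abstract's rule that an interaction with a projection object counts as an interaction with the corresponding virtual object.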
Abstract:
Disclosed is an actuator that generates haptic sensations. The actuator has a spherical rotor driven by a magnetic force vector created around it; a stator defining a space that corresponds in shape to the spherical rotor, so that the rotor is positioned in the space with a portion of its upper part exposed; at least three rotation-driving coils formed in the stator at a given distance from each other to provide the magnetic force vector to the spherical rotor; and a driving unit that independently controls the electric current supplied to each of the rotation-driving coils to create the magnetic force vector.
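The driving unit's task of turning a desired force vector into three independent coil currents can be sketched, under a strong simplifying assumption: each coil's contribution is linear in its current and directed along a fixed axis, so the currents follow from a least-squares solve. The coil geometry below (three axes 120 degrees apart) is hypothetical, not taken from the patent.

```python
import numpy as np

# Hypothetical unit axes of three rotation-driving coils spaced 120 degrees
# apart around the stator; each column is one coil's force direction.
COIL_AXES = np.array([
    [1.0,  -0.5,    -0.5],
    [0.0,   0.866,  -0.866],
    [0.5,   0.5,     0.5],
])

def coil_currents(target_vector):
    """Least-squares currents whose superposed (assumed-linear) coil
    contributions produce the target magnetic force vector."""
    currents, *_ = np.linalg.lstsq(COIL_AXES, np.asarray(target_vector, float),
                                   rcond=None)
    return currents

i = coil_currents([0.0, 0.0, 1.5])
print(np.round(COIL_AXES @ i, 6))   # recovers the target [0. 0. 1.5]
```

By symmetry, a purely vertical target splits the load equally across the three coils; asymmetric targets yield unequal currents, which is what lets the driving unit steer the rotor in any direction.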
Abstract:
Provided is a three-dimensional magnetic-sensor-based finger motion capture interface device, including a back-of-hand fixing member; a finger wearing member; at least one link member which is disposed between the back-of-hand fixing member and the finger wearing member and includes at least one magnetic sensor; at least one fixing member which connects a plurality of link members; and a controller which receives sensor coordinate values corresponding to changes in the magnetic lines of force sensed by the at least one magnetic sensor, extracts pitch and yaw motions of each link member based on the received sensor coordinate values, and calculates the user's finger position based on the extracted pitch and yaw motion values.
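The controller's final step, computing the finger position from per-link pitch and yaw angles, is a forward-kinematics chain and can be sketched as follows. The link lengths, axis conventions, and function names are assumptions for illustration; the patent does not fix them.

```python
import numpy as np

def rot_x(a):   # pitch about the local x axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):   # yaw about the local y axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def fingertip_position(link_lengths, pitches, yaws):
    """Chain each link member's extracted pitch/yaw rotation, accumulating
    the position of the fingertip at the end of the last link."""
    R = np.eye(3)
    p = np.zeros(3)
    for length, pitch, yaw in zip(link_lengths, pitches, yaws):
        R = R @ rot_y(yaw) @ rot_x(pitch)
        p = p + R @ np.array([0.0, 0.0, length])   # each link extends along local +z
    return p

# Straight finger (all angles zero): the tip sits at the summed link
# lengths along +z.
print(fingertip_position([4.0, 2.5, 2.0], [0, 0, 0], [0, 0, 0]))  # -> [0.  0.  8.5]
```

Bending the first joint by 90 degrees of pitch swings the whole 8.5-unit chain out of the +z direction, which is exactly the dependency the magnetic sensors exist to measure.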
Abstract:
Provided is a force conveyance system configured with 6 degrees of freedom, thereby allowing free movement such as opening and closing of the hand and adduction/abduction of a finger while reflecting a desired force to the fingertip without obstructing finger movement. The force conveyance system may also estimate the fingertip position and finger joint angles, measure finger movement, and thereby convey a more accurate force.
Abstract:
Disclosed is an actuator including a support member; an actuating unit rotatably installed in the support member, having a first electrode installed on one side and a stimulation providing unit installed on the other side to provide stimulation by rotation; and an attraction force providing unit having a second electrode to provide an attraction force to the first electrode. When an electrostatic attraction force is applied to the first electrode through the second electrode, the actuating unit pivots so that the stimulation providing unit applies stimulation to a sensing unit.