Abstract:
The present invention relates to a telepresence device capable of enabling various types of verbal or non-verbal interaction between a remote user and a local user. In accordance with an embodiment, the telepresence device may include a camera combined with direction control means; a projector provided on top of the camera and configured such that a direction of the projector is controlled by the direction control means along with a direction of the camera; and a control unit configured to control the direction of the camera by operating the direction control means, to extract an object at which a remote user gazes from an image acquired by the camera whose direction has been controlled, to generate a projection image related to the object, and to project the projection image around the object by controlling the projector.
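The control flow described above (steer the camera toward the remote user's gaze target, then drive the projector along the same axis) reduces, at its core, to converting a 3D target point into pan/tilt commands for the direction control means. A minimal sketch, assuming a right-handed frame with y up and z forward at the camera; `aim_angles` is a hypothetical helper, not the patent's terminology:

```python
import math

def aim_angles(target, origin=(0.0, 0.0, 0.0)):
    """Return (pan, tilt) in degrees needed to point the shared
    camera/projector axis from `origin` at `target`
    (assumed axes: x right, y up, z forward)."""
    dx = target[0] - origin[0]
    dy = target[1] - origin[1]
    dz = target[2] - origin[2]
    pan = math.degrees(math.atan2(dx, dz))                    # yaw about the vertical axis
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))   # pitch up/down
    return pan, tilt

pan, tilt = aim_angles((1.0, 1.0, 1.0))
print(round(pan, 2), round(tilt, 2))  # → 45.0 35.26
```

Because the projector is mounted on top of the camera and shares its direction control, the same pair of angles would aim both units at the gazed-at object.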
Abstract:
The present invention relates to an auxiliary registration apparatus for registering a display device and an image sensor. The apparatus includes a camera; a panel which is interoperated with the camera and on which a first pattern is displayed; and a control part which allows the first pattern to be shot by the image sensor and a second pattern, displayed on a screen of the display device, to be shot by the camera; wherein the control part allows information on a transformation relationship between a coordinate system of the display device and that of the image sensor to be acquired by referring to information on a transformation relationship between a coordinate system of the panel and that of the image sensor and information on a transformation relationship between a coordinate system of the camera and that of the display device.
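The composition the control part performs can be pictured as chained rigid-body transforms: the panel-to-sensor and camera-to-display transforms come from shooting the two patterns, and, assuming the panel-camera rig is calibrated in advance (a plausible reading of "interoperated"), the display-to-sensor transform falls out by matrix composition. All function names and numeric values below are illustrative assumptions, not the patent's notation:

```python
import math

def mat_mul(A, B):
    """4x4 homogeneous matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(T):
    """Inverse of a rigid transform [R|t]: [R^T | -R^T t]."""
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]          # R transposed
    t = [-sum(Rt[i][j] * T[j][3] for j in range(3)) for i in range(3)]
    return [Rt[0] + [t[0]], Rt[1] + [t[1]], Rt[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]

def rot_z(deg, tx=0.0, ty=0.0, tz=0.0):
    """Rigid transform: rotation about z plus a translation (toy values)."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0.0, tx], [s, c, 0.0, ty], [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def apply_point(T, p):
    """Apply a 4x4 rigid transform to a 3D point."""
    ph = list(p) + [1.0]
    return tuple(sum(T[i][j] * ph[j] for j in range(4)) for i in range(3))

# Illustrative stand-ins for the two pattern shots and the fixed rig:
T_panel_to_sensor   = rot_z(30.0, 0.10, 0.00, 0.50)   # image sensor shoots the panel pattern
T_camera_to_display = rot_z(-10.0, 0.00, 0.20, 1.00)  # camera shoots the screen pattern
T_panel_to_camera   = rot_z(5.0, 0.05, 0.00, 0.00)    # pre-calibrated panel-camera rig

# display -> sensor = (panel -> sensor) . (camera -> panel) . (display -> camera)
T_display_to_sensor = mat_mul(
    T_panel_to_sensor,
    mat_mul(rigid_inverse(T_panel_to_camera),
            rigid_inverse(T_camera_to_display)),
)
```

The point of the auxiliary panel is visible in the composition: neither the display nor the image sensor ever has to see the other directly, because the panel-camera rig bridges the two measured transforms.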
Abstract:
A method for displaying a shadow of a 3D virtual object includes steps of: (a) acquiring information on a viewpoint of a user looking at a 3D virtual object displayed at a specific location in 3D space by a wall display device; (b) determining a location and a shape of the shadow of the 3D virtual object to be displayed by referring to the information on the viewpoint of the user and information on a shape of the 3D virtual object; and (c) allowing the shadow of the 3D virtual object to be displayed by at least one of the wall display device and a floor display device by referring to the determined location and the determined shape of the shadow of the 3D virtual object. Accordingly, the user is allowed to perceive an accurate sense of depth or distance regarding the 3D virtual object.
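Step (b) can be read as a viewpoint-dependent projection: each vertex of the virtual object is cast from the user's viewpoint onto the floor, and the resulting footprint gives the shadow's location and shape. A minimal sketch under that assumption, with the floor taken as the plane y = 0 and `project_to_floor` a hypothetical name:

```python
def project_to_floor(viewpoint, vertex, floor_y=0.0):
    """Intersect the ray from `viewpoint` through `vertex` with the
    horizontal plane y == floor_y; returns the shadow point (x, z)."""
    vx, vy, vz = viewpoint
    px, py, pz = vertex
    if vy <= floor_y or py >= vy:
        raise ValueError("viewpoint must be above both the floor and the vertex")
    t = (vy - floor_y) / (vy - py)   # ray parameter where y reaches floor_y
    return (vx + t * (px - vx), vz + t * (pz - vz))

# Example: an eye at height 2 m casts a vertex at height 1 m twice as far out.
print(project_to_floor((0.0, 2.0, 0.0), (1.0, 1.0, 0.5)))  # → (2.0, 1.0)
```

Running every vertex of the object's silhouette through such a projection, and splitting the result between the floor and wall display surfaces, would yield the location and shape referred to in steps (b) and (c).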
Abstract:
Provided is a tactile feedback device including a tactile transmission element having an enclosed space inside, the tactile transmission element including a compression part which is compressed toward the enclosed space by an electrostatic force generated by the application of a voltage, and a tactile part which transmits a tactile sensation to a user by expanding with the movement of air caused by the compression.
Abstract:
Provided is a tactile transmission device, which includes a base unit forming one surface of the tactile transmission device, a tip-tilt elastic member stacked on the base unit and configured to transmit a tactile feel to a finger of a user in a first direction oriented upward from a bottom surface of the finger and a second direction intersecting the first direction at a predetermined angle, and a cover disposed at an upper side of the tip-tilt elastic member to form another surface of the tactile transmission device.
Abstract:
Provided are a user interface device and a control method thereof for supporting easy and accurate selection of overlapped objects. The user interface device is a device for providing a user interface applied to a three-dimensional (3D) virtual space in which a plurality of virtual objects is created, and includes a gaze sensor unit to sense a user's gaze, an interaction sensor unit to sense the user's body motion for interaction with the virtual object in the 3D virtual space, a display unit to display the 3D virtual space, and a control unit to, when the user's gaze overlaps at least two virtual objects, generate projection objects corresponding to the overlapped virtual objects, wherein when an interaction between the projection object and the user is sensed, the control unit processes the interaction as an interaction between the virtual object corresponding to the projection object and the user.
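One way to picture the projection objects is as proxies fanned out to the side, one per overlapped virtual object, so that each becomes individually reachable; an interaction with a proxy is then routed back to its source object, as the control unit does in the abstract. A toy sketch of that bookkeeping (the offsets, spacing, and function names are illustrative assumptions, not the patent's design):

```python
def make_projection_objects(overlapped_ids, spacing=0.3):
    """Generate one selectable proxy per overlapped virtual object,
    fanned out side by side so the user can interact with each one
    unambiguously. Returns {proxy_offset_x: original_object_id}."""
    n = len(overlapped_ids)
    start = -spacing * (n - 1) / 2.0          # center the fan on the gaze ray
    return {round(start + i * spacing, 6): oid
            for i, oid in enumerate(overlapped_ids)}

def route_interaction(proxies, touched_offset):
    """Map an interaction with a proxy back to its source virtual object."""
    return proxies.get(touched_offset)

# Three objects stacked along the gaze ray become three separated proxies.
proxies = make_projection_objects(["cube", "sphere", "cone"])
print(route_interaction(proxies, 0.3))  # → cone
```

The essential move, per the abstract, is the last line: the device never asks the user to disambiguate the stacked originals directly, it only senses interactions with the spread-out proxies.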
Abstract:
A motion capture system includes a motion sensor having a flexible body and a fiber Bragg grating (FBG) sensor inserted into the body, a fixture configured to fix the motion sensor to the body of a user, a light source configured to irradiate light onto the motion sensor, and a measurer configured to analyze reflected light output from the motion sensor, wherein the FBG sensor includes an optical fiber extending along a longitudinal direction of the body and a sensing unit formed in a partial region of the optical fiber and having a plurality of gratings, and wherein a change in a wavelength spectrum of the reflected light, caused by a change in the interval of the gratings due to a motion of the user, is detected to measure a motion state of the user.
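The sensing principle rests on the Bragg condition λ_B = 2·n_eff·Λ: bending the flexible body changes the grating interval Λ, which shifts the reflected wavelength, and the shift maps back to strain via Δλ/λ_B ≈ (1 − p_e)·ε. A sketch of that last step, with a typical silica photo-elastic coefficient and base wavelength assumed for illustration (the constants are not taken from the patent):

```python
def strain_from_shift(lambda_measured_nm, lambda_base_nm=1550.0, p_e=0.22):
    """Estimate axial strain from the Bragg wavelength shift:
    Delta_lambda / lambda_B ~= (1 - p_e) * strain, where p_e is the
    effective photo-elastic coefficient (typical silica value ~0.22)."""
    shift_nm = lambda_measured_nm - lambda_base_nm
    return shift_nm / (lambda_base_nm * (1.0 - p_e))

# A 1.2 nm shift on a 1550 nm grating corresponds to roughly 993 microstrain.
print(round(strain_from_shift(1551.2) * 1e6))  # → 993
```

In the system above, the measurer would evaluate such a relation per grating; with several gratings along the fiber, the per-grating strains can be combined to reconstruct the bending, and hence the motion state, of the body part the fixture holds.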
Abstract:
A method of controlling a virtual model to perform a physics simulation on the virtual model in a virtual space includes: generating a first virtual model having a first object physics field, which is a range with respect to a first field parameter; generating a second virtual model having a second object physics field, which is a range with respect to a second field parameter; when the field parameters are capable of corresponding to each other, checking whether there is a portion where the object physics fields correspond to each other; and when there is such a portion, generating an interaction between the virtual models.
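The interaction test described above amounts to two checks: the field parameters must be of a corresponding kind, and the two ranges must share a portion. A minimal sketch, assuming each object physics field is a one-dimensional interval over a named parameter (the class and function names are illustrative, not the patent's):

```python
from dataclasses import dataclass

@dataclass
class PhysicsField:
    parameter: str   # field parameter kind, e.g. "temperature"
    low: float       # lower bound of the field's range
    high: float      # upper bound of the field's range

def fields_interact(a: PhysicsField, b: PhysicsField) -> bool:
    """Models interact when their field parameters correspond (same kind)
    and their ranges share a portion (overlapping intervals)."""
    if a.parameter != b.parameter:               # parameters cannot correspond
        return False
    return a.low <= b.high and b.low <= a.high   # closed-interval overlap test

heat_src = PhysicsField("temperature", 40.0, 90.0)
probe    = PhysicsField("temperature", 80.0, 120.0)
print(fields_interact(heat_src, probe))  # → True (80..90 is shared)
```

Only when both checks pass would the simulation go on to generate the interaction between the two virtual models; disjoint ranges or mismatched parameter kinds short-circuit with no effect.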
Abstract:
The present invention relates to a data processing device, method, and computer program for data sharing among multiple users. The device includes a sensor module collecting data by using at least one of a camera sensor, a distance sensor, a microphone array, a motion capture sensor, an environment scanner, and a haptic device; a memory module storing and managing the data collected by the sensor module; and a network module transmitting the data stored in the memory module to a remote location, or receiving predetermined data from the remote location, wherein the memory module stores the collected data and the received data in a standardized format according to data features.
Abstract:
Provided are a method and system for noncontact vision-based 3D cognitive fatigue measurement. The method comprises: acquiring pupil images of a subject exposed to visual stimuli; extracting a task-evoked pupillary response (TEPR) from the pupil images; detecting dominant peaks in the TEPR; calculating the latency of the dominant peaks; and determining the cognitive fatigue of the subject by comparing the latency value to a predetermined reference value.
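The peak-latency step can be sketched as follows, assuming the TEPR is a sampled pupil-diameter trace and taking the dominant peak to be the largest local maximum (the function names and the reference value are illustrative assumptions, not the method's specification):

```python
def dominant_peak_latency(samples, sample_rate_hz, stimulus_index=0):
    """Find the dominant (largest) local maximum in a pupil-diameter
    trace and return its latency in seconds after stimulus onset."""
    peaks = [i for i in range(1, len(samples) - 1)
             if samples[i - 1] < samples[i] >= samples[i + 1]]
    if not peaks:
        return None
    dominant = max(peaks, key=lambda i: samples[i])
    return (dominant - stimulus_index) / sample_rate_hz

def is_fatigued(latency_s, reference_s=1.0):
    """Flag cognitive fatigue when peak latency exceeds the reference."""
    return latency_s is not None and latency_s > reference_s

# Toy trace sampled at 10 Hz: peaks at indices 2 and 4; index 4 dominates.
trace = [3.0, 3.1, 3.4, 3.2, 3.6, 3.3, 3.1]
print(dominant_peak_latency(trace, sample_rate_hz=10))  # → 0.4
```

The fatigue decision in the final step is then a single comparison of the measured latency against the predetermined reference value, as `is_fatigued` illustrates.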