Abstract:
A method for displaying a shadow of a 3D virtual object includes steps of: (a) acquiring information on a viewpoint of a user looking at a 3D virtual object displayed at a specific location in 3D space by a wall display device; (b) determining a location and a shape of a shadow of the 3D virtual object to be displayed by referring to the information on the viewpoint of the user and information on a shape of the 3D virtual object; and (c) allowing the shadow of the 3D virtual object to be displayed by at least one of the wall display device and a floor display device by referring to the determined location and the determined shape of the shadow of the 3D virtual object. Accordingly, the user is allowed to feel an accurate sense of depth or distance regarding the 3D virtual object.
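As a rough illustration of step (b) only, the sketch below (Python, not from the patent) projects the vertices of a 3D virtual object onto a floor plane along rays from a point tied to the user's viewpoint; the planar-projection model, the coordinate convention, and all function names are assumptions.

    import numpy as np

    def project_shadow(vertices, light_pos, floor_z=0.0):
        """Project object vertices onto the floor plane z = floor_z along rays
        from a point tied to the viewpoint (an assumed shadow model)."""
        shadow = []
        for v in vertices:
            d = v - light_pos                    # ray direction: viewpoint -> vertex
            if abs(d[2]) < 1e-9:                 # ray parallel to the floor: skip
                continue
            t = (floor_z - light_pos[2]) / d[2]  # ray parameter at the floor plane
            if t > 0:
                shadow.append((light_pos + t * d)[:2])   # (x, y) footprint on the floor
        return np.array(shadow)

    # Example: a small cube floating above the floor, viewpoint at (0, -2, 1.7)
    cube = np.array([[x, y, z] for x in (0.4, 0.6) for y in (0.4, 0.6) for z in (0.5, 0.7)])
    print(project_shadow(cube, light_pos=np.array([0.0, -2.0, 1.7])))

The returned 2D footprint is what a floor display device would render; deciding whether part of the shadow falls on the wall display instead would require a second projection plane.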
Abstract:
A method enables a first device and a second device to support interactions with respect to a 3D object. The method includes steps of: (a) allowing the first device to acquire information on a physical 3D object and information on images of a user; (b) allowing the second device to receive the information on the physical 3D object and the information on images of the user of the first device, and then to display a virtual 3D object corresponding to the physical 3D object and a 3D avatar of the user of the first device; and (c) allowing the first device to transmit information on manipulation of the physical 3D object by the user of the first device and information on images of the user of the first device who is manipulating the physical 3D object, and then allowing the second device to display the 3D avatar of the user of the first device.
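A minimal sketch of the first-to-second-device data flow, assuming a simple message schema; the class and field names below are illustrative and not taken from the abstract.

    from dataclasses import dataclass

    # Hypothetical message types for the first-device -> second-device stream.
    @dataclass
    class ObjectInfo:
        object_id: str
        mesh_vertices: list        # geometry describing the physical 3D object

    @dataclass
    class UserFrame:
        user_id: str
        image_bytes: bytes         # camera image used to build/update the 3D avatar

    @dataclass
    class Manipulation:
        object_id: str
        translation: tuple         # how the remote user moved the physical object

    class SecondDevice:
        """Mirrors the remote scene: virtual 3D object plus the remote user's avatar."""
        def __init__(self):
            self.objects, self.avatars = {}, {}

        def on_object(self, msg: ObjectInfo):
            self.objects[msg.object_id] = msg.mesh_vertices     # display virtual copy

        def on_user_frame(self, msg: UserFrame):
            self.avatars[msg.user_id] = msg.image_bytes         # display 3D avatar

        def on_manipulation(self, msg: Manipulation):
            print(f"apply {msg.translation} to {msg.object_id}")  # mirror manipulation

    device2 = SecondDevice()
    device2.on_object(ObjectInfo("cube-1", [(0, 0, 0), (1, 1, 1)]))
    device2.on_manipulation(Manipulation("cube-1", (0.1, 0.0, 0.0)))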
Abstract:
The present invention provides a method for planning a path for an autonomous walking humanoid robot that performs autonomous walking using environment map information, the method comprising: an initialization step of initializing path input information of the autonomous walking humanoid robot using origin information, destination information, and the environment map information; an input information conversion step of forming a virtual robot, with the virtual robot information obtained by considering the radius and the radius of gyration of the autonomous walking humanoid robot, based on the initialized path input information; a path generation step of generating a path of the virtual robot using the virtual robot information, the origin information S, the destination information G, and the environment map information; and an output information conversion step of converting the virtual robot path generated in the path generation step into a path of the autonomous walking humanoid robot.
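A minimal sketch of the virtual-robot idea and the path generation step, assuming a grid map in which obstacles are inflated by the robot radius so the robot can be planned as a point, and with a plain breadth-first search standing in for the unspecified planner; none of this is taken from the patent's actual algorithm.

    from collections import deque

    def inflate(grid, radius_cells):
        """Grow obstacles by the (virtual) robot radius, a stand-in for the
        'virtual robot' conversion: the robot is then treated as a point."""
        rows, cols = len(grid), len(grid[0])
        out = [row[:] for row in grid]
        for r in range(rows):
            for c in range(cols):
                if grid[r][c]:
                    for dr in range(-radius_cells, radius_cells + 1):
                        for dc in range(-radius_cells, radius_cells + 1):
                            rr, cc = r + dr, c + dc
                            if 0 <= rr < rows and 0 <= cc < cols:
                                out[rr][cc] = 1
        return out

    def bfs_path(grid, start, goal):
        """Breadth-first search on the inflated grid (placeholder planner)."""
        rows, cols = len(grid), len(grid[0])
        prev, queue = {start: None}, deque([start])
        while queue:
            cur = queue.popleft()
            if cur == goal:
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = prev[cur]
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols \
                   and not grid[nxt[0]][nxt[1]] and nxt not in prev:
                    prev[nxt] = cur
                    queue.append(nxt)
        return None

    grid = [[0, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 0, 0]]
    print(bfs_path(inflate(grid, 0), (0, 0), (3, 3)))  # radius 0 = point robot for brevity

The virtual-robot path returned here would still need the output information conversion step to be turned back into footsteps for the humanoid robot.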
Abstract:
Embodiments relate to a torsion sensor device which measures a degree of torsion of a measurement object by using a fiber Bragg grating (FBG) sensor, the sensor device comprising: an FBG sensor including a sensing unit formed in one section of an elongated optical fiber; and a fixing device for fixing and supporting the FBG sensor so as to cause displacement of the FBG sensor according to motion of the measurement object, wherein the fixing device includes a bending prevention member that enables the sensing unit to undergo torsional displacement without bending displacement according to the motion of the measurement object.
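For background only, the sketch below uses the standard FBG strain relation (wavelength shift divided by Bragg wavelength is approximately (1 - p_e) times strain), which is general fiber-optics background rather than anything specific to this device; the photo-elastic coefficient value is an assumed typical number.

    # Standard FBG relations (not specific to this patent): lambda_B = 2 * n_eff * Lambda,
    # and an axial strain eps shifts it by roughly d_lambda / lambda_B = (1 - p_e) * eps.
    def strain_from_shift(lambda_b_nm, d_lambda_nm, p_e=0.22):
        """Estimate axial strain in the grating from the measured wavelength shift."""
        return d_lambda_nm / (lambda_b_nm * (1.0 - p_e))

    # Example: a 1550 nm grating shifted by 0.12 nm -> roughly 99 microstrain
    print(strain_from_shift(1550.0, 0.12) * 1e6, "microstrain")

Mapping such a strain reading to a torsion angle would depend on the fixing geometry, which the abstract does not detail.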
Abstract:
A local clock skew compensation device of the present invention is a client device which is synchronized with another client device to provide a time-aware service, and includes: a local time providing unit which supplies first local time data and second local time data in accordance with a local clock; a media scheduling unit which receives first media data and second media data from the other client device, schedules a first playout time of the first media data using the first local time data, and schedules a second playout time of the second media data using the second local time data; and a skew monitoring unit which requests global time data from a global time server when a difference between the first playout time and the second playout time exceeds a skew threshold value, wherein the first media data and the second media data are different types of media data.
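A minimal sketch of the skew monitoring unit's decision, assuming an illustrative threshold value and a callback that stands in for the request to the global time server; names and numbers are not from the abstract.

    import time

    SKEW_THRESHOLD_S = 0.040   # illustrative skew threshold, not from the patent

    class SkewMonitor:
        """Compare the two scheduled playout times and fall back to a
        (hypothetical) global time server when they drift apart."""
        def __init__(self, request_global_time):
            self.request_global_time = request_global_time   # callback to the server

        def check(self, playout_time_1, playout_time_2):
            if abs(playout_time_1 - playout_time_2) > SKEW_THRESHOLD_S:
                return self.request_global_time()             # resynchronize
            return None                                       # skew within tolerance

    monitor = SkewMonitor(request_global_time=lambda: time.time())
    print(monitor.check(10.000, 10.055))   # drift above threshold -> global time fetched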
Abstract:
Disclosed are a method and apparatus for manipulating an object in virtual or augmented reality based on a hand motion capture apparatus providing haptic feedback. The method includes receiving a value of a sensor at a specific position on a finger from the hand motion capture apparatus, estimating a motion of the finger based on the value of the sensor and adjusting a motion of a virtual hand accordingly, detecting contact of the adjusted virtual hand with a virtual object, and, upon detecting the contact with the virtual object, providing feedback to the user through the hand motion capture apparatus, wherein the virtual hand is modeled for each user.
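A minimal sketch of the sensor-to-feedback loop, assuming a linear per-user calibration for the finger model and a sphere-proxy contact test; the patent does not specify either model, and the fingertip position is passed in directly here rather than derived by forward kinematics.

    import numpy as np

    def estimate_joint_angles(sensor_values, calibration):
        """Map raw sensor readings to finger joint angles using a per-user
        (gain, offset) calibration; the linear mapping is an assumption."""
        gain, offset = calibration
        return gain * np.asarray(sensor_values) + offset

    def check_contact(fingertip_pos, obj_center, obj_radius):
        """Sphere-proxy contact test between the virtual fingertip and a virtual object."""
        return np.linalg.norm(np.asarray(fingertip_pos) - np.asarray(obj_center)) <= obj_radius

    def step(sensor_values, calibration, fingertip_pos, obj_center, obj_radius, send_haptics):
        angles = estimate_joint_angles(sensor_values, calibration)   # adjust virtual hand
        if check_contact(fingertip_pos, obj_center, obj_radius):
            send_haptics(intensity=1.0)                              # feedback on contact
        return angles

    angles = step([0.2, 0.4, 0.1], (90.0, 0.0), fingertip_pos=(0.0, 0.0, 0.05),
                  obj_center=(0.0, 0.0, 0.0), obj_radius=0.06,
                  send_haptics=lambda intensity: print("haptic", intensity))
    print(angles)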
Abstract:
Provided are a method and system for detecting information of brain-heart connectivity, the method comprising: obtaining moving images of a pupil and an electrocardiogram (ECG) signal from a subject; acquiring a pupil size variation (PSV) from the moving images by segmenting the moving images over a predetermined time range after each R-peak of the ECG signal; extracting signals of a first period and a second period from the PSV; and calculating alpha powers of the signals of the first and second periods at predetermined frequencies, respectively.
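A minimal sketch of R-peak-locked epoching of the PSV and a band-power computation, assuming a pupil-camera frame rate and an 8-13 Hz band; the abstract only says "predetermined frequencies", so the band, the window length, and the periodogram method are all assumptions.

    import numpy as np

    def epoch_after_rpeaks(psv, r_peaks, fs, window_s=1.0):
        """Cut the pupil-size-variation signal into fixed windows after each R-peak."""
        n = int(window_s * fs)
        return np.array([psv[p:p + n] for p in r_peaks if p + n <= len(psv)])

    def band_power(signal, fs, f_lo=8.0, f_hi=13.0):
        """Power in a chosen band via a plain FFT periodogram (band is an assumption)."""
        spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2 / len(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        return spec[(freqs >= f_lo) & (freqs <= f_hi)].sum()

    fs = 120.0                                      # assumed pupil-camera frame rate
    psv = np.random.randn(int(10 * fs))             # placeholder PSV trace
    epochs = epoch_after_rpeaks(psv, r_peaks=[120, 360, 600], fs=fs)
    print([band_power(e, fs) for e in epochs])      # one alpha-band power per epoch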
Abstract:
A three-dimensional information augmented video see-through display device according to an exemplary embodiment of the present disclosure includes a camera interface module which obtains at least two real images from at least two camera modules, a rectification module which performs rectification on the at least two real images, a lens distortion correction module which corrects at least two composite images, obtained by combining a virtual image with the at least two real images, based on a lens distortion compensation value indicating a value for compensating for the distortion of a wide-angle lens with respect to the at least two real images, and an image generation module which performs side-by-side image processing on the at least two composite images to generate a three-dimensional image for virtual reality (VR) or augmented reality (AR).
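A minimal sketch of the processing order (rectification, compositing, lens compensation, side-by-side packing). The calibration-dependent warps are left as identity placeholders because the abstract does not give the rectification or distortion parameters; only the compositing and side-by-side steps are concrete here.

    import numpy as np

    def rectify(left, right):
        """Placeholder rectification: a real implementation would warp both images
        onto a common epipolar geometry using stereo calibration data."""
        return left, right

    def composite(real, virtual, alpha_mask):
        """Overlay the rendered virtual image onto the real camera image."""
        return (alpha_mask * virtual + (1.0 - alpha_mask) * real).astype(real.dtype)

    def lens_compensate(image, compensation_value):
        """Placeholder for the wide-angle-lens compensation warp; 'compensation_value'
        stands in for the patent's lens distortion compensation value."""
        return image

    def side_by_side(left, right):
        return np.hstack([left, right])    # stereo frame sent to the VR/AR display

    h, w = 480, 640
    left = right = np.zeros((h, w, 3), dtype=np.uint8)
    virtual = np.full((h, w, 3), 255, dtype=np.uint8)
    mask = np.zeros((h, w, 1)); mask[100:200, 100:200] = 1.0
    left_r, right_r = rectify(left, right)
    frame = side_by_side(lens_compensate(composite(left_r, virtual, mask), 0.0),
                         lens_compensate(composite(right_r, virtual, mask), 0.0))
    print(frame.shape)   # (480, 1280, 3)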
Abstract:
A motion capture system includes a motion sensor having a flexible body and a fiber Bragg grating (FBG) sensor inserted into the body, a fixture configured to fix the motion sensor to a user's body, a light source configured to irradiate the motion sensor with light, and a measurer configured to analyze reflected light output from the motion sensor, wherein the FBG sensor includes an optical fiber extending along a longitudinal direction of the body and a sensing unit formed in a partial region of the optical fiber and having a plurality of gratings, and wherein a change in a wavelength spectrum of the reflected light, caused by a change in the interval of the gratings due to a motion of the user, is detected to measure a motion state of the user.
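For illustration only, a linear mapping from the measured Bragg wavelength shift to a joint angle with an assumed per-sensor calibration constant; the abstract does not specify how the spectral change is converted into a motion state.

    def joint_angle_from_wavelength(d_lambda_nm, k_nm_per_deg):
        """Map the measured Bragg wavelength shift to a joint angle using a
        calibration constant (linear model assumed, not from the patent)."""
        return d_lambda_nm / k_nm_per_deg

    # Example: 0.30 nm shift with a calibrated 0.01 nm/deg sensitivity -> 30 degrees
    print(joint_angle_from_wavelength(0.30, 0.01))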
Abstract:
A bio-stimulation robot includes a stationary platform, a plurality of drive modules coupled to the stationary platform, and a motion platform coupled to the drive modules, which operate to change a position of the motion platform. Each of the drive modules includes a first guide member having an arc shape, a motion member coupled to the first guide member, and a leg member having a first end coupled to the motion member and a second end fixed to the motion platform. The motion member slides along the first guide member. The second end of the leg member is rotatably connected to the motion platform.