Abstract:
Provided is a method of operating a rendering server. The method includes identifying at least one object from image information, generating a rendered image for the at least one object, transmitting the rendered image to be displayed on a user terminal, extracting at least one piece of event information corresponding to a specific time period from the image information, and providing the at least one piece of event information to a host server to be synchronized according to the specific time period.
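The event-extraction step described above can be hedged as a minimal sketch: assuming each event record carries a timestamp, events falling inside a specific time period are selected and handed to the host server for synchronization. All names below are illustrative, not taken from the patent.

```python
# Minimal sketch of extracting event information for a specific time period.
# Event records and field names are illustrative assumptions.

def extract_events(events, period_start, period_end):
    """Return events whose timestamp falls in [period_start, period_end)."""
    return [e for e in events if period_start <= e["timestamp"] < period_end]

events = [
    {"timestamp": 1.0, "type": "object_moved"},
    {"timestamp": 2.5, "type": "object_added"},
    {"timestamp": 4.0, "type": "object_removed"},
]
# Only the event at t = 2.5 falls inside the period [1.5, 3.0)
selected = extract_events(events, 1.5, 3.0)
```

The host server would then apply the selected events in timestamp order to stay synchronized with the rendered view for that period.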
Abstract:
Provided are an intuitive interaction apparatus and method. The intuitive interaction apparatus includes a detector configured to detect three-dimensional (3D) information of an object of interest (OOI), including a body part of a first object and an object close to the body part, from a 3D image frame of the first object in an eye-gaze range of the first object and a restorer configured to combine pieces of the 3D information of the OOI detected by the detector and three-dimensionally model the OOI to generate a 3D model which is to be displayed in virtual reality.
Abstract:
Provided are an apparatus and method for tracking a camera that reconstructs a real environment in three dimensions by using reconstruction segments and a volumetric surface. The camera tracking apparatus using reconstruction segments and a volumetric surface includes a reconstruction segment division unit configured to divide three-dimensional space reconstruction segments extracted from an image acquired by a camera, a transformation matrix generation unit configured to generate a transformation matrix for at least one reconstruction segment among the reconstruction segments obtained by the reconstruction segment division unit, and a reconstruction segment connection unit configured to rotate or move the at least one reconstruction segment according to the transformation matrix generated by the transformation matrix generation unit and connect the rotated or moved reconstruction segment with another reconstruction segment.
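The rotate/move-then-connect operation above can be hedged as a small sketch: a reconstruction segment is treated as an (N, 3) array of surface points, a 4×4 homogeneous rigid transform is applied to it, and the moved segment is connected to another by merging the point sets. The representation and function names are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def transform_segment(points, T):
    """Apply a 4x4 homogeneous rigid transform T to (N, 3) segment points."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homog @ T.T)[:, :3]

def connect_segments(seg_a, seg_b):
    """Connect two reconstruction segments by merging their point sets."""
    return np.vstack([seg_a, seg_b])

# Example transform: 90-degree rotation about z, then translation by (1, 0, 0)
T = np.array([[0.0, -1.0, 0.0, 1.0],
              [1.0,  0.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, 0.0],
              [0.0,  0.0, 0.0, 1.0]])
seg = np.array([[1.0, 0.0, 0.0]])
moved = transform_segment(seg, T)      # (1, 0, 0) -> rotated to (0, 1, 0), shifted to (1, 1, 0)
merged = connect_segments(moved, np.array([[0.0, 0.0, 0.0]]))
```

In practice the transformation matrix would come from the transformation matrix generation unit (e.g., from camera pose estimation); here it is fixed by hand to keep the sketch self-contained.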
Abstract:
Provided are an apparatus and method for providing projection mapping-based augmented reality (AR). According to an exemplary embodiment, the apparatus includes an input to acquire real space information and user information; and a processor to recognize a real environment by using the acquired real space information and the acquired user information, map the recognized real environment to a virtual environment, generate augmented content that changes corresponding to a change in space or a user's movement, and project and visualize the generated augmented content through a projector.
Abstract:
Provided are an SNS-based content creating system and method. The SNS-based content creating system includes a terminal configured to share content creation information or content correction information with another terminal through the SNS, and a visualization object managing server configured to manage a database, wherein the terminal comprises a user input unit configured to receive text information, a text processor configured to recommend a visualization candidate material for the text information by using the database, a visualization processor configured to create or correct content based on selection information about the recommended visualization candidate material, and a network processor configured to transmit the content to the other terminal.
Abstract:
Provided are a VR motion simulator that allows a user to experience a vertical motion in a VR environment by analyzing a pressure distribution of a sole of the user and estimating a posture of the user, and a method of implementing a vertical motion action of a user in VR. The simulator includes a pressure distribution image generating module generating a pressure distribution image of a sole of the user at a time of a vertical motion of the user; a sole position tracking module analyzing the pressure distribution image to detect the sole of the user and track a position of the sole on the basis of movement of the detected sole to output sole position tracking information; and a posture estimating module estimating a posture of the user on the basis of the sole position tracking information to output posture estimation information.
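The sole position tracking step can be hedged as a minimal sketch: treat the pressure distribution image as a 2D array, threshold it to find the active sole region, and track the sole position as the pressure-weighted centroid of that region. Threshold value, array sizes, and function names are illustrative assumptions.

```python
import numpy as np

def sole_position(pressure, threshold=0.1):
    """Pressure-weighted centroid (row, col) of cells above threshold,
    or None when no cell registers enough pressure (sole airborne)."""
    mask = pressure > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    w = pressure[rows, cols]
    return (np.average(rows, weights=w), np.average(cols, weights=w))

# Toy 4x4 pressure frame: equal pressure at (1, 1) and (1, 2)
frame = np.zeros((4, 4))
frame[1, 1] = 1.0
frame[1, 2] = 1.0
pos = sole_position(frame)   # centroid at row 1, halfway between cols 1 and 2
```

A posture estimating module could then compare successive centroids (and total pressure) across frames, e.g., interpreting a vanishing pressure reading as the airborne phase of a jump.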
Abstract:
The present invention relates to a method and an apparatus for matching a virtual object in a virtual environment, the method including generating a point cloud of a non-rigid object, matching a low resolution virtual model to the point cloud, and implementing a high resolution model using the matched model.
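In the simplest rigid case, matching a low-resolution virtual model to a point cloud is a least-squares alignment problem. A hedged sketch using the well-known Kabsch/Procrustes solution follows; this is a standard technique assumed for illustration, not necessarily the patent's matching method, and it further assumes point correspondences between model and cloud are already known.

```python
import numpy as np

def rigid_align(model, cloud):
    """Least-squares rotation R and translation t mapping model points onto
    corresponding cloud points (Kabsch algorithm via SVD)."""
    mu_m, mu_c = model.mean(axis=0), cloud.mean(axis=0)
    H = (model - mu_m).T @ (cloud - mu_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_c - R @ mu_m
    return R, t

model = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
cloud = model + np.array([2.0, -1.0, 0.5])   # pure translation, no rotation
R, t = rigid_align(model, cloud)
```

For a non-rigid object, as in the abstract, rigid alignment of this kind would typically serve only as an initialization before deformable refinement.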
Abstract:
Provided are an apparatus and method for detecting a plurality of arms and hands by using a three-dimensional (3D) image. The apparatus includes an image input unit configured to acquire a 3D image of an object, an arm detecting unit configured to detect one or more component-unit candidate regions of the object in the 3D image and to detect one or more arm regions by using a pattern recognition algorithm and arm detection feature information extracted from each of the candidate regions, and a hand detecting unit configured to calculate a position of a hand and a position of a wrist in each of the arm regions detected by the arm detecting unit and to detect a hand region by using the position of the hand and the position of the wrist.
Abstract:
An apparatus for 3D reconstruction based on multiple GPUs and a method thereof are disclosed. The 3D reconstruction apparatus according to the present invention includes: a camera configured to generate depth data for a 3D space; a first GPU configured to update first TSDF volume data with first depth data generated for a first area and predict a surface point of an object present in the space from the first updated TSDF volume data; a second GPU configured to update second TSDF volume data with second depth data generated for a second area and predict a surface point of an object present in the space from the second updated TSDF volume data; and a master GPU configured to combine a surface point predicted from the first TSDF volume data and a surface point predicted from the second TSDF volume data.
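The per-GPU TSDF update can be hedged as the standard weighted running average over voxels, the formulation commonly used in volumetric depth fusion; this is an assumption for illustration rather than the patent's exact update rule, and NumPy stands in for GPU code.

```python
import numpy as np

def update_tsdf(tsdf, weight, new_sdf, new_weight, max_weight=64.0):
    """Fuse a new truncated signed-distance observation into the volume
    using the standard per-voxel weighted running average."""
    fused = (tsdf * weight + new_sdf * new_weight) / (weight + new_weight)
    w = np.minimum(weight + new_weight, max_weight)   # cap to stay adaptive
    return fused, w

# Toy 2x2x2 volume: existing value 0.0 with weight 1, new observation 0.5
tsdf = np.zeros((2, 2, 2))
weight = np.ones((2, 2, 2))
obs = np.full((2, 2, 2), 0.5)
fused, w = update_tsdf(tsdf, weight, obs, np.ones_like(obs))
# each fused voxel -> 0.25, each weight -> 2.0
```

In the multi-GPU arrangement of the abstract, each GPU would run this update on its own sub-volume (first and second areas), and the master GPU would merge the surface points predicted from each sub-volume, e.g., at the zero crossings of the fused TSDF.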
Abstract:
A bidirectional display device may include: a first transparent display panel outputting a first image in a first direction and transmitting light reflected in a second direction that is an opposite direction of the first direction; a first transparent light panel disposed behind the first transparent display panel and providing light to the first transparent display panel; a transmittance control panel disposed behind the first transparent light panel; a second transparent display panel outputting a second image in the second direction and transmitting light reflected in the first direction; a second transparent light panel disposed between the transmittance control panel and the second transparent display panel and providing light to the second transparent display panel; a transmittance controller controlling transmittance of at least one object included in the first image or the second image; and an image output controller controlling output of the first image and the second image.