Abstract:
Disclosed herein are an apparatus and method for detecting an emotional change through facial expression analysis. The apparatus for detecting an emotional change through facial expression analysis includes a memory having at least one program recorded thereon, and a processor configured to execute the program, wherein the program includes a camera image acquisition unit configured to acquire a moving image including at least one person, a preprocessing unit configured to extract a face image of a user from the moving image and preprocess the extracted face image, a facial expression analysis unit configured to extract a facial expression vector from the face image of the user and cumulatively store the facial expression vector, and an emotional change analysis unit configured to detect the temporal location of a sudden emotional change by analyzing an emotion signal extracted from the cumulatively stored facial expression vector values.
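A minimal sketch of the change-detection step described above, assuming the facial expression vector holds per-frame emotion intensities and that a "sudden change" is a jump in the windowed mean of the derived emotion signal; the function names, window size, and threshold are illustrative assumptions, not the patented method.

```python
import numpy as np

def emotion_signal(expression_vectors: np.ndarray) -> np.ndarray:
    """Collapse cumulatively stored T x D expression vectors into a 1-D
    emotion signal, here simply the dominant-emotion intensity per frame."""
    return expression_vectors.max(axis=1)

def detect_sudden_change(signal: np.ndarray, window: int = 15,
                         threshold: float = 0.3) -> list[int]:
    """Return frame indices where the windowed mean of the emotion signal
    jumps by more than `threshold` (an assumed change criterion)."""
    changes = []
    for t in range(window, len(signal) - window):
        before = signal[t - window:t].mean()
        after = signal[t:t + window].mean()
        if abs(after - before) > threshold:
            changes.append(t)
    return changes

# Usage with synthetic data: a flat signal with an abrupt shift at frame 100.
sig = np.concatenate([np.full(100, 0.2), np.full(100, 0.8)])
print(detect_sudden_change(sig))  # indices clustered around frame 100
```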
Abstract:
Disclosed herein are an apparatus and method for reconstructing an experience item in 3D. The apparatus for reconstructing an experience item in 3D includes a 3D data generation unit for generating 3D data by reconstructing the 3D shape of a target object to be reconstructed in 3D, a 2D data generation unit for generating 2D data by performing 2D parameterization on the 3D data, an attribute setting unit for assigning attribute information corresponding to the target object to the 3D data, an editing unit for receiving edits to the 2D data from a user, and an experience item generation unit for generating an experience item corresponding to the target object using the 3D data corresponding to the edited 2D data and the attribute information.
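A toy data model for this pipeline, assuming a triangle mesh as the 3D data and a per-vertex UV map as its 2D parameterization; all type and function names are hypothetical, and the planar projection merely stands in for a real parameterization algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Mesh3D:
    vertices: list[tuple[float, float, float]]
    faces: list[tuple[int, int, int]]

@dataclass
class UVMap2D:
    # One (u, v) coordinate per 3D vertex, produced by 2D parameterization.
    uv: list[tuple[float, float]]

@dataclass
class ExperienceItem:
    mesh: Mesh3D
    attributes: dict = field(default_factory=dict)  # attribute information

def parameterize(mesh: Mesh3D) -> UVMap2D:
    """Toy planar projection standing in for real 2D parameterization."""
    return UVMap2D(uv=[(x, y) for x, y, _ in mesh.vertices])

def apply_edit(mesh: Mesh3D, edited_uv: list[tuple[float, float]]) -> Mesh3D:
    """Propagate user edits made in UV space back to the 3D vertices
    (here: reuse the edited u, v as x, y and keep the original z)."""
    new_vertices = [(u, v, z) for (u, v), (_, _, z)
                    in zip(edited_uv, mesh.vertices)]
    return Mesh3D(vertices=new_vertices, faces=mesh.faces)
```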
Abstract:
Disclosed herein are an apparatus for recognizing a user command using non-contact gaze-based head motion information and a method using the same. The method includes monitoring the gaze and the head motion of a user based on a sensor, displaying a user interface at a location corresponding to the gaze based on gaze-based head motion information acquired by combining the gaze and the head motion, and recognizing a user command selected from the user interface.
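A compact sketch of how the described flow might fit together, assuming the gaze point anchors an on-screen pointer, head motion refines it, and a command is selected by dwelling on a menu item; sensor access and UI drawing are stubbed out, and every name here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GazeHeadSample:
    gaze_xy: tuple[float, float]     # gaze point on the screen
    head_delta: tuple[float, float]  # head motion offset

def fuse(sample: GazeHeadSample, head_gain: float = 0.5):
    """Gaze-based head motion information: gaze anchors the pointer,
    head motion refines it (gain is an assumed tuning parameter)."""
    gx, gy = sample.gaze_xy
    dx, dy = sample.head_delta
    return gx + head_gain * dx, gy + head_gain * dy

def recognize_command(samples, menu_items, dwell_frames: int = 30):
    """Return the menu item the fused pointer stays on for `dwell_frames`
    consecutive samples (a simple dwell-based selection rule)."""
    streak, current = 0, None
    for s in samples:
        x, y = fuse(s)
        hit = next((name for name, (x0, y0, x1, y1) in menu_items.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), None)
        streak = streak + 1 if hit == current and hit else 1
        current = hit
        if hit and streak >= dwell_frames:
            return hit
    return None
```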
Abstract:
Disclosed herein are an apparatus and method for generating a 3D avatar. The method, performed by the apparatus, includes performing a 3D scan of the body of a user using an image sensor, generating a 3D scan model from the result of the scan, matching the 3D scan model with a previously stored template avatar, and generating a 3D avatar based on the result of matching the 3D scan model and the template avatar.
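One way the matching step could look, assuming point correspondences between the scan and the template are already known: a rigid orthogonal Procrustes alignment followed by a toy per-vertex blend. This is a sketch of a standard alignment technique, not the patent's specific matching procedure.

```python
import numpy as np

def align_scan_to_template(scan_pts: np.ndarray,
                           template_pts: np.ndarray) -> np.ndarray:
    """Return scan points rigidly aligned (rotation + translation) to the
    template via an orthogonal Procrustes fit over corresponding points."""
    scan_c = scan_pts - scan_pts.mean(axis=0)
    tmpl_c = template_pts - template_pts.mean(axis=0)
    u, _, vt = np.linalg.svd(scan_c.T @ tmpl_c)
    rot = u @ vt
    # Guard against a reflection in the fitted transform.
    if np.linalg.det(rot) < 0:
        u[:, -1] *= -1
        rot = u @ vt
    return scan_c @ rot + template_pts.mean(axis=0)

def generate_avatar(aligned_scan: np.ndarray, template_pts: np.ndarray,
                    weight: float = 1.0) -> np.ndarray:
    """Deform the template toward the aligned scan (toy per-vertex blend)."""
    return (1 - weight) * template_pts + weight * aligned_scan
```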
Abstract:
Disclosed herein are an apparatus and method for providing an augmented reality-based realistic experience. The apparatus for providing an augmented reality-based realistic experience includes a hardware unit and a software processing unit. The hardware unit includes a mirror configured to have a reflective characteristic and a transmissive characteristic, a display panel configured to present an image of an augmented reality entity, and a sensor configured to acquire information about a user. The software processing unit performs color compensation on the color of the augmented reality entity and then presents the entity via the display panel, based on the information about the user from the hardware unit.
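A minimal sketch of what the color-compensation step might involve, assuming a simple additive half-mirror model in which the perceived color is the panel color scaled by the mirror's transmittance plus the reflected scene scaled by its reflectance; the coefficients and the model itself are assumptions, not values from the disclosure.

```python
import numpy as np

def compensate_color(target_rgb: np.ndarray, scene_rgb: np.ndarray,
                     transmittance: float = 0.6,
                     reflectance: float = 0.4) -> np.ndarray:
    """Solve T * panel + R * scene = target for the panel color,
    clamped to the displayable [0, 1] range."""
    panel = (target_rgb - reflectance * scene_rgb) / transmittance
    return np.clip(panel, 0.0, 1.0)

# Example: render a mid-gray AR entity over a bright reflected scene.
print(compensate_color(np.array([0.5, 0.5, 0.5]),
                       np.array([0.8, 0.8, 0.8])))
```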
Abstract:
Disclosed herein is a method for supporting an attention test based on an attention map and an attention movement map. The method includes generating a score distribution for each segment area of frames satisfying preset conditions, among the frames of video content produced in advance to suit the purpose of the test, generating an attention map corresponding to the frames based on the distribution of the gaze point of a subject, generating an attention movement map corresponding to the frames based on information about the movement of the gaze point of the subject, and calculating the attention of the subject using the score distribution for each segment area, the attention map, and the attention movement map.
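An illustrative computation over the three inputs the abstract names: a per-area score distribution, an attention map built from gaze-point positions, and an attention movement map built from gaze-point motion. The grid size, weighting, and normalization below are assumptions.

```python
import numpy as np

def attention_map(gaze_points, grid=(4, 4)):
    """Histogram of gaze points over segment areas, normalized to sum to 1.
    Gaze coordinates are assumed to lie in [0, 1)."""
    h = np.zeros(grid)
    for x, y in gaze_points:
        h[int(y * grid[0]), int(x * grid[1])] += 1
    return h / max(h.sum(), 1)

def movement_map(gaze_points, grid=(4, 4)):
    """Accumulate gaze displacement magnitude into the area it moves into."""
    m = np.zeros(grid)
    for (x0, y0), (x1, y1) in zip(gaze_points, gaze_points[1:]):
        m[int(y1 * grid[0]), int(x1 * grid[1])] += np.hypot(x1 - x0, y1 - y0)
    return m / max(m.sum(), 1)

def attention_score(score_dist, att_map, move_map, w_move=0.5):
    """Weighted agreement between where the subject looked (and moved)
    and where the content's score distribution says attention should be."""
    return float((score_dist * (att_map + w_move * move_map)).sum())
```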
Abstract:
Disclosed herein are an apparatus and method for providing additional information to an FU in a reconfigurable codec. The apparatus includes a syntax parser, a prediction mode converter unit, and an inverse prediction unit. The syntax parser parses an encoding type value and a first prediction mode value from a multimedia bit stream. The prediction mode converter unit converts the first prediction mode value into a second prediction mode value corresponding to the encoding type value. The inverse prediction unit determines an inverse prediction operating mode based on the second prediction mode value.
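A sketch of the conversion step, assuming the converter is a lookup keyed by (encoding type, first prediction mode value). The table entries below are placeholders for illustration only, not values from any codec standard or from the disclosure.

```python
MODE_TABLE = {
    # (encoding_type, first_mode) -> second_mode understood by the
    # inverse prediction functional unit.
    ("INTRA", 0): 10,
    ("INTRA", 1): 11,
    ("INTER", 0): 20,
    ("INTER", 1): 21,
}

def convert_prediction_mode(encoding_type: str, first_mode: int) -> int:
    """Map the parsed first prediction mode value to the second prediction
    mode value corresponding to the encoding type value."""
    try:
        return MODE_TABLE[(encoding_type, first_mode)]
    except KeyError:
        raise ValueError(f"unsupported mode {first_mode} for {encoding_type}")

def inverse_prediction_operating_mode(second_mode: int) -> str:
    """Pick the operating mode from the converted value (illustrative)."""
    return "intra-recon" if second_mode < 20 else "inter-recon"
```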
Abstract:
Disclosed herein are an apparatus and method for monitoring a user based on multi-view face images. The apparatus includes memory in which at least one program is recorded and a processor for executing the program. The program may include a face detection unit for extracting face area images from respective user images captured from two or more different viewpoints, a down-conversion unit for generating at least one attribute-specific 2D image by mapping information about at least one attribute in the 3D space of the face area images onto a 2D UV space, and an analysis unit for generating user monitoring information by analyzing the at least one attribute-specific 2D image.
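A toy version of the down-conversion step, assuming each face-mesh vertex already has a UV coordinate and a scalar attribute value; a real system would rasterize triangles into the UV image, while this sketch splats samples to the nearest texel and averages.

```python
import numpy as np

def to_uv_image(uv_coords: np.ndarray, attr_values: np.ndarray,
                size: int = 64) -> np.ndarray:
    """Map per-vertex 3D attribute values onto a 2D UV image.
    uv_coords: N x 2 in [0, 1); attr_values: N. Returns a size x size map."""
    img = np.zeros((size, size))
    count = np.zeros((size, size))
    cols = (uv_coords[:, 0] * size).astype(int)
    rows = (uv_coords[:, 1] * size).astype(int)
    np.add.at(img, (rows, cols), attr_values)   # accumulate attribute values
    np.add.at(count, (rows, cols), 1)           # and per-texel sample counts
    return np.divide(img, count, out=img, where=count > 0)
```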
Abstract:
Disclosed herein are a method and apparatus for active identification based on gaze path analysis. The method may include extracting the face image of a user, extracting the gaze path of the user based on the face image, verifying the identity of the user based on the gaze path, and determining whether the face image is authentic.
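One plausible reading of the verification step, assuming the gaze path is a sequence of 2D points compared against an enrolled path after resampling; the distance measure, resampling length, and threshold are illustrative assumptions.

```python
import numpy as np

def resample_path(path: np.ndarray, n: int = 50) -> np.ndarray:
    """Resample a T x 2 gaze path to n points by linear interpolation."""
    t = np.linspace(0, len(path) - 1, n)
    x = np.interp(t, np.arange(len(path)), path[:, 0])
    y = np.interp(t, np.arange(len(path)), path[:, 1])
    return np.stack([x, y], axis=1)

def verify_identity(observed: np.ndarray, enrolled: np.ndarray,
                    threshold: float = 0.05) -> bool:
    """Accept when the mean point-wise distance between the resampled
    observed path and the enrolled path falls below the threshold."""
    a, b = resample_path(observed), resample_path(enrolled)
    return float(np.linalg.norm(a - b, axis=1).mean()) < threshold
```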
Abstract:
Disclosed herein are a method and apparatus for augmented-reality rendering on a mirror display based on the motion of an augmented-reality target. The apparatus includes an image acquisition unit for acquiring a sensor image corresponding to at least one of a user and an augmented-reality target, a user viewpoint perception unit for acquiring the coordinates of the eyes of the user using the sensor image, an augmented-reality target recognition unit for recognizing the augmented-reality target to which augmented reality is to be applied, a motion analysis unit for calculating the speed of motion of the augmented-reality target based on multiple frames, and a rendering unit for performing rendering by adjusting the transparency of the virtual content to be applied to the augmented-reality target according to the speed of motion and by determining the position at which the virtual content is to be rendered based on the coordinates of the eyes.
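A minimal sketch of the speed-to-transparency policy the abstract describes: the faster the augmented-reality target moves across frames, the more transparent the virtual content becomes. The speed units and fade range are assumptions.

```python
import numpy as np

def target_speed(positions: np.ndarray, fps: float = 30.0) -> float:
    """Mean per-second displacement of the target over recent frames.
    positions: T x 2 pixel coordinates, one row per frame."""
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    return float(steps.mean() * fps)

def content_alpha(speed: float, v_min: float = 50.0,
                  v_max: float = 400.0) -> float:
    """Fully opaque below v_min, fully transparent above v_max,
    linear in between (alpha = 1 means opaque)."""
    return float(np.clip(1.0 - (speed - v_min) / (v_max - v_min), 0.0, 1.0))

# Example: a target crossing ~8 pixels per frame at 30 fps fades partially.
track = np.cumsum(np.full((10, 2), 8.0), axis=0)
print(content_alpha(target_speed(track)))
```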