Abstract:
A robot and a method for operating the same according to one aspect of the present disclosure can provide emotion-based services by acquiring data related to a user and recognizing emotional information on the basis of that data, and can automatically generate a character expressing the user's emotion by generating an avatar that maps the recognized emotional information onto the user's face information.
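The abstract's mapping step can be illustrated with a minimal sketch. The data model below (emotion labels, avatar parameters, landmark lists) is an assumption for illustration only; the disclosure does not specify how emotional information is represented or attached to face information.

```python
from dataclasses import dataclass

# Illustrative emotion-to-expression-parameter table (assumed, not
# taken from the disclosure).
EMOTION_STYLES = {
    "happy":   {"mouth_curve": 0.8,  "eye_openness": 1.0},
    "sad":     {"mouth_curve": -0.6, "eye_openness": 0.6},
    "neutral": {"mouth_curve": 0.0,  "eye_openness": 0.9},
}

@dataclass
class Avatar:
    face_landmarks: list   # face information acquired from the user
    mouth_curve: float     # expression parameter from the emotion
    eye_openness: float

def generate_avatar(face_landmarks, emotion):
    """Map recognized emotional information onto the user's face
    information; unknown emotions fall back to a neutral style."""
    style = EMOTION_STYLES.get(emotion, EMOTION_STYLES["neutral"])
    return Avatar(face_landmarks=face_landmarks, **style)
```

The point of the sketch is the composition: the avatar carries both the user's face information and parameters derived from the recognized emotion.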
Abstract:
A robot includes a sensing unit including at least one sensor for detecting a user, a face detector configured to acquire an image including a face of the user detected by the sensing unit, a controller configured to detect an interaction intention of the user from the acquired image, and an output unit including at least one of a speaker or a display configured to output, when the interaction intention is detected, at least one of sound or a screen for inducing interaction with the user.
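The detect-then-induce flow above can be sketched as follows. The intention criteria (a frontal-face score and a dwell time) and the callback-based speaker/display interfaces are assumptions for illustration; the abstract does not say how intention is computed.

```python
def has_interaction_intention(face, min_frontal=0.8, min_duration_s=1.0):
    """Hypothetical intention test: the face has been oriented toward
    the robot (frontal_score in 0..1) for long enough."""
    return (face["frontal_score"] >= min_frontal
            and face["duration_s"] >= min_duration_s)

def induce_interaction(face, speaker=None, display=None):
    """When intention is detected, output sound and/or a screen to
    induce interaction; returns whether anything was output."""
    if not has_interaction_intention(face):
        return False
    if speaker is not None:
        speaker("Hello! Can I help you?")   # sound output
    if display is not None:
        display("greeting_screen")          # screen output
    return True
```

Passing either output as `None` mirrors the "at least one of a speaker or a display" wording.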
Abstract:
A method for controlling volume in an apparatus having a speaker and a microphone includes receiving, at the microphone, external noise and speech of a user, and calculating the sound pressure of the noise received by the microphone. The method further includes performing exception processing on the sound pressure of some or all of the noise using the calculated sound pressure and one of a speech utterance state, a speech receiving state, or a temporal length state of the noise; mapping the volume of the speech in response to the sound pressure of the external noise; synthesizing speech guidance into a sound file; and outputting the sound file, via the speaker, at the mapped volume.
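The measure, exception-process, and map steps can be sketched as below. The RMS-based level estimate, the decibel range, and the rule "exclude noise frames captured while the user is speaking" are illustrative assumptions; the abstract names the states but not the concrete formulas.

```python
import math

def sound_pressure_db(samples, ref=1.0):
    """Approximate sound pressure level (dB) from raw samples via RMS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-12) / ref)

def effective_noise_db(frames, default_db=-60.0):
    """Exception processing (assumed rule): frames is a list of
    (samples, user_is_speaking); frames captured during the user's
    own utterance are excluded so they do not inflate the estimate."""
    usable = [sound_pressure_db(s) for s, speaking in frames if not speaking]
    if not usable:
        return default_db
    return sum(usable) / len(usable)

def map_volume(noise_db, quiet_db=-60.0, loud_db=0.0,
               min_vol=0.2, max_vol=1.0):
    """Map the noise level to an output volume: louder surroundings
    yield louder speech guidance, clamped to [min_vol, max_vol]."""
    t = (noise_db - quiet_db) / (loud_db - quiet_db)
    t = min(max(t, 0.0), 1.0)
    return min_vol + t * (max_vol - min_vol)
```

A synthesized guidance file would then be played through the speaker at `map_volume(effective_noise_db(frames))`.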
Abstract:
An image photographing device for photographing images through a front camera and a rear camera, according to one embodiment of the present invention, comprises: a display unit; a feature extraction unit for extracting facial features from an image of a user's face displayed on a preview screen through the rear camera; a structure extraction unit for extracting the structure of the user's face by using the extracted facial features; an expression extraction unit for extracting the user's facial expression by using the extracted facial features if the extracted facial structure matches a standard facial structure; and a notification unit for outputting a photograph notification signal if the extracted facial expression of the user matches a standard facial expression.
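The gated pipeline above (expression is only checked after the structure matches, and the notification fires only when both match) can be sketched as follows. The dictionary feature representation and the per-feature tolerance match are assumptions for illustration, not the embodiment's actual matching method.

```python
def matches(features, standard, tol=0.1):
    """Crude similarity test (assumed): every standard feature must be
    within `tol` of the extracted value."""
    return all(abs(features[k] - standard[k]) <= tol for k in standard)

def photograph_signal(face_features, std_structure, std_expression):
    """Emit the photograph notification signal only if the facial
    structure matches first, then the facial expression matches."""
    structure = {k: face_features[k] for k in std_structure}
    if not matches(structure, std_structure):
        return False  # expression is not even extracted
    expression = {k: face_features[k] for k in std_expression}
    return matches(expression, std_expression)
```

The early return mirrors the abstract's ordering: expression extraction is conditional on the structure match.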