Abstract:
An example user terminal apparatus includes communication circuitry configured to be connected to a home network comprising a plurality of devices; a display configured to display a UI screen for managing the home network; a sensor configured to sense a user manipulation of the UI screen; and processing circuitry configured to change the UI screen displayed on the display according to the user manipulation. The UI screen is one of a plurality of service pages that are changeable according to a user manipulation in a first direction, the plurality of service pages being pages for respectively providing different home network management services. At least one of the plurality of service pages comprises an area that is displayable on the display according to a user manipulation in a second direction.
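As a hedged illustration only (the abstract gives no implementation detail), the following Python sketch models a hypothetical set of home-network service pages in which a swipe in a first direction switches pages and a swipe in a second direction reveals an additional area of the current page; all class and method names are invented.

class ServicePage:
    def __init__(self, name, hidden_area=None):
        self.name = name                # e.g. "Device Control", "Energy Monitoring"
        self.hidden_area = hidden_area  # area shown on a second-direction swipe
        self.area_visible = False

class HomeNetworkUI:
    def __init__(self, pages):
        self.pages = pages
        self.index = 0

    def swipe_first_direction(self, delta):
        """First-direction swipe: move to the previous/next service page."""
        self.index = (self.index + delta) % len(self.pages)
        self.pages[self.index].area_visible = False
        return self.pages[self.index].name

    def swipe_second_direction(self):
        """Second-direction swipe: reveal the page's additional area, if any."""
        page = self.pages[self.index]
        if page.hidden_area is not None:
            page.area_visible = True
        return page.hidden_area

ui = HomeNetworkUI([ServicePage("Device Control", "device list"),
                    ServicePage("Energy Monitoring")])
print(ui.swipe_first_direction(+1))   # Energy Monitoring
print(ui.swipe_first_direction(+1))   # Device Control
print(ui.swipe_second_direction())    # device list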
Abstract:
Provided is a device for detecting stress of a user based on a bio-signal of the user and, when the stress is detected, outputting information of a peripheral device.
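Purely as an illustrative sketch, the Python snippet below assumes a heart-rate bio-signal and a simple moving-average threshold for stress detection; the abstract does not specify the signal type or detection algorithm, and all names and thresholds here are assumptions.

from collections import deque

class StressDetector:
    def __init__(self, window=5, threshold_bpm=100):
        self.samples = deque(maxlen=window)
        self.threshold_bpm = threshold_bpm

    def add_sample(self, bpm):
        self.samples.append(bpm)
        return self.is_stressed()

    def is_stressed(self):
        if not self.samples:
            return False
        return sum(self.samples) / len(self.samples) > self.threshold_bpm

detector = StressDetector()
for bpm in (72, 95, 110, 118, 121):
    if detector.add_sample(bpm):
        # On detection, the device could surface peripheral-device information.
        print("Stress detected at", bpm, "bpm -> output peripheral device info")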
Abstract:
A method, performed by a device, of providing security content includes receiving a touch and drag input indicating that a user drags a visual representation of a first application displayed on a touch screen of the device to a fingerprint recognition area while the user touches the visual representation of the first application with a finger; performing authentication on a fingerprint of the finger detected on the touch screen using a fingerprint sensor included in the fingerprint recognition area; and when the performing authentication on the fingerprint is successful, displaying the security content associated with the first application on an execution window of the first application.
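The flow can be sketched as follows (a hypothetical Python outline, not the patent's API; the fingerprint matching is stubbed and all identifiers are invented):

REGISTERED_FINGERPRINT = "fp-user-1"   # stand-in for enrolled fingerprint data

def authenticate(detected_fingerprint):
    """Placeholder for the fingerprint sensor's matching step."""
    return detected_fingerprint == REGISTERED_FINGERPRINT

def handle_touch_and_drag(app, drop_area, detected_fingerprint):
    if drop_area != "fingerprint_recognition_area":
        return f"{app}: normal drag, no security content shown"
    if authenticate(detected_fingerprint):
        return f"{app}: showing security content in the app's execution window"
    return f"{app}: authentication failed, security content stays hidden"

print(handle_touch_and_drag("Gallery", "fingerprint_recognition_area", "fp-user-1"))
print(handle_touch_and_drag("Gallery", "fingerprint_recognition_area", "fp-unknown"))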
Abstract:
Provided is a method of recognizing text in a terminal, the method including: when a user interface (UI) for inputting text displayed on the terminal is executed, generating first tag information about a kind of language set in the UI and a location of a cursor at a time point when a text input has started; when a language switch request that requests the terminal to switch the kind of language set in the UI is received, generating second tag information about a kind of switched language and a location of the cursor at a time point of receiving the language switch request; when the text input is finished, storing a screen image of the terminal; and recognizing the text input to the terminal based on at least one piece of tag information and the screen image.
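As a rough sketch under assumed data structures, the tag information (language plus cursor location) could be used to recognize each span of the stored screen image with the language that was active when it was typed; the recognition step below is only a stub and every name is invented.

from dataclasses import dataclass

@dataclass
class TagInfo:
    language: str         # language active in the input UI
    cursor_position: int  # cursor location when the tag was generated

def recognize_text(screen_image, tags):
    """Recognize each span with the language recorded in its tag (stubbed)."""
    results = []
    for i, tag in enumerate(tags):
        start = tag.cursor_position
        end = tags[i + 1].cursor_position if i + 1 < len(tags) else None
        span = screen_image[start:end]
        results.append((tag.language, span))  # real system: OCR(span, language)
    return results

tags = [TagInfo("en", 0), TagInfo("ko", 6)]   # language switched after "Hello "
screen_image = "Hello 안녕하세요"                # stand-in for the captured screen
print(recognize_text(screen_image, tags))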
Abstract:
A sound outputting apparatus includes a communicator configured to communicate with an electronic apparatus and receive first audio data, an output module configured to output the received first audio data and to output second audio data, which is modified data of the first audio data, a sensor configured to detect brainwave data of a user, and a processor configured to control the communicator so that the brainwave data of the user detected through the sensor is transmitted to the electronic apparatus and the second audio data is received from the electronic apparatus.
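A minimal sketch of the round trip, with invented names and an assumed modification rule (the abstract does not say how the second audio data is derived from the brainwave data):

class ElectronicApparatus:
    def modify_audio(self, first_audio, brainwave):
        # Assumption for illustration: lower the volume when the brainwave
        # metric suggests the listener is drowsy or fatigued.
        gain = 0.5 if brainwave.get("alpha_power", 0) > 0.7 else 1.0
        return [sample * gain for sample in first_audio]

class SoundOutputtingApparatus:
    def __init__(self, remote):
        self.remote = remote

    def play(self, first_audio, brainwave):
        self.output(first_audio)                          # first audio data
        second_audio = self.remote.modify_audio(first_audio, brainwave)
        self.output(second_audio)                         # modified audio data

    def output(self, audio):
        print("outputting:", audio)

device = SoundOutputtingApparatus(ElectronicApparatus())
device.play([0.2, 0.4, 0.6], {"alpha_power": 0.8})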
Abstract:
A speech recognition method and apparatus for performing speech recognition in response to an activation word determined based on a situation are provided. The speech recognition method and apparatus include an artificial intelligence (AI) system and its application, which simulate functions such as recognition and judgment of a human brain using a machine learning algorithm such as deep learning.
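One hedged way to picture the situation-dependent activation word, with situations and words invented purely for illustration:

ACTIVATION_WORDS = {
    "driving": "hi car",
    "home":    "hi speaker",
    "default": "hi assistant",
}

def activation_word_for(situation):
    return ACTIVATION_WORDS.get(situation, ACTIVATION_WORDS["default"])

def maybe_start_recognition(utterance, situation):
    word = activation_word_for(situation)
    if utterance.lower().startswith(word):
        command = utterance[len(word):].strip()
        return f"recognizing: '{command}'"
    return "ignored (activation word for this situation not detected)"

print(maybe_start_recognition("hi car navigate home", "driving"))
print(maybe_start_recognition("hi speaker play music", "driving"))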
Abstract:
A voice converting apparatus and a voice converting method are provided. The method of converting a voice using the voice converting apparatus includes receiving a voice from a counterpart, analyzing the voice and determining whether the voice is abnormal, converting the voice into a normal voice by adjusting a harmonic signal of the voice in response to determining that the voice is abnormal, and transmitting the normal voice.
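As an illustrative sketch only, the snippet below treats the received voice as a short list of harmonic amplitudes, flags it as abnormal when harmonic energy is low, and rescales the harmonics toward an assumed reference profile; none of these thresholds or profiles come from the abstract.

REFERENCE_HARMONICS = [1.0, 0.6, 0.4, 0.25]   # assumed normal-voice profile

def is_abnormal(harmonics, threshold=0.5):
    # Abnormal if overall harmonic energy falls well below the reference.
    energy = sum(h * h for h in harmonics)
    ref_energy = sum(h * h for h in REFERENCE_HARMONICS)
    return energy < threshold * ref_energy

def convert_to_normal(harmonics):
    # Adjust each harmonic toward the reference profile.
    return [0.5 * (h + r) for h, r in zip(harmonics, REFERENCE_HARMONICS)]

received = [0.3, 0.15, 0.1, 0.05]              # weak, "abnormal" harmonics
if is_abnormal(received):
    print("transmitting converted voice:", convert_to_normal(received))
else:
    print("transmitting voice unchanged:", received)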
Abstract:
A device comprises a viewpoint sensor for sensing a user's viewpoint; a rendering viewpoint determination unit for determining a rendering viewpoint according to the sensed user's viewpoint; a rendering performing unit for rendering a three-dimensional (3D) graphical user interface (GUI) screen according to the determined rendering viewpoint; a display unit for displaying the rendered 3D GUI screen; and a controller, wherein when the user's viewpoint changes, at least one new object is additionally displayed on the 3D GUI screen.
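A hedged sketch of the described behaviour, with invented names: the sensed user viewpoint determines the rendering viewpoint, and a change in viewpoint adds a new object to the rendered 3D GUI scene.

class GuiRenderer3D:
    def __init__(self):
        self.rendering_viewpoint = None
        self.objects = ["main menu"]

    def on_viewpoint_sensed(self, user_viewpoint):
        changed = (self.rendering_viewpoint is not None
                   and user_viewpoint != self.rendering_viewpoint)
        self.rendering_viewpoint = user_viewpoint   # determine rendering viewpoint
        if changed:
            # New object is additionally displayed only when the viewpoint moves.
            self.objects.append(f"side panel @ {user_viewpoint}")
        self.render()

    def render(self):
        print("render from", self.rendering_viewpoint, "objects:", self.objects)

renderer = GuiRenderer3D()
renderer.on_viewpoint_sensed((0, 0))    # initial viewpoint, no extra object
renderer.on_viewpoint_sensed((15, 0))   # viewpoint change -> new object shown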
Abstract:
A portable apparatus and a method of changing a content screen of the portable apparatus are provided. The portable apparatus changes displayed content in response to an increase in visual fatigue. Some of the disclosed embodiments provide a portable apparatus that calculates a visual fatigue by using user electroencephalogram (EEG) information received from a wearable apparatus and changes displayed content into other content in response to an increase in the calculated visual fatigue, and a method of changing a content screen of the portable apparatus.
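As an assumption-laden sketch, visual fatigue could be scored from EEG band values received from a wearable apparatus and the displayed content switched when the score rises; the scoring heuristic and threshold below are invented for illustration and are not taken from the abstract.

class PortableApparatus:
    def __init__(self, fatigue_threshold=0.6):
        self.fatigue_threshold = fatigue_threshold
        self.content = "video (high brightness)"

    def calculate_fatigue(self, eeg):
        # Assumed heuristic: more theta relative to beta -> more visual fatigue.
        theta, beta = eeg["theta"], eeg["beta"]
        return theta / (theta + beta)

    def on_eeg_received(self, eeg):
        fatigue = self.calculate_fatigue(eeg)
        if fatigue > self.fatigue_threshold:
            self.content = "eye-friendly content (dimmed, larger text)"
        print(f"fatigue={fatigue:.2f}, showing: {self.content}")

device = PortableApparatus()
device.on_eeg_received({"theta": 4.0, "beta": 6.0})   # 0.40 -> keep content
device.on_eeg_received({"theta": 7.0, "beta": 3.0})   # 0.70 -> change content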