Abstract:
Provided is an electronic device that includes a first processor configured to receive an audio signal, perform first voice recognition on the audio signal, and transfer a driving signal to a second processor based on a result of the first voice recognition. In response to the driving signal, the second processor performs second voice recognition based on either a voice signal produced by the first voice recognition or the audio signal itself.
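The two-stage flow described above can be sketched roughly as follows. This is a minimal illustration, not the disclosed implementation: the keyword check, the function names, and the use of a plain function call as the "driving signal" are all assumptions.

```python
def first_voice_recognition(audio_signal):
    """Lightweight first-stage check, e.g. a wake-keyword match (stand-in)."""
    return "hi device" in audio_signal

def second_voice_recognition(audio_signal):
    """Fuller second-stage recognition, run only after a driving signal."""
    return audio_signal.replace("hi device", "").strip()

def second_processor(audio_signal):
    return second_voice_recognition(audio_signal)

def first_processor(audio_signal):
    # Transfer a "driving signal" (here, a call) only if the first
    # recognition result warrants waking the second processor.
    if first_voice_recognition(audio_signal):
        return second_processor(audio_signal)
    return None  # second processor stays idle

print(first_processor("hi device play music"))  # -> play music
```

The point of the split is that the cheap first stage can run continuously while the more expensive second stage is powered only on demand.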
Abstract:
An electronic device, a keyword management device, and a method are disclosed. The electronic device includes a processor that receives a keyword and a keyword search condition, transmits them to a keyword management device to perform a search based on at least the keyword search condition, receives a search result from the keyword management device, displays the search result, and combines the search result, the keyword, and the keyword search condition for subsequent retrieval. The keyword management device includes a processor that receives the keyword and the keyword search condition from the electronic device, executes a search on at least one of a plurality of information-providing servers using the keyword and the keyword search condition, acquires the search result, combines the search result, the keyword, and the keyword search condition for subsequent retrieval, and transmits the search result to the electronic device.
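The "combine for subsequent retrieval" idea above amounts to caching the search result keyed by the (keyword, condition) pair. A minimal sketch, assuming hypothetical names throughout (the class, the callable servers, and the dictionary cache are all illustrative):

```python
class KeywordManager:
    """Stand-in for the keyword management device."""

    def __init__(self, servers):
        self.servers = servers  # callables acting as information-providing servers
        self.cache = {}         # (keyword, condition) -> combined search result

    def search(self, keyword, condition):
        key = (keyword, condition)
        if key in self.cache:            # subsequent retrieval: no re-search needed
            return self.cache[key]
        results = [s(keyword, condition) for s in self.servers]
        self.cache[key] = results        # combine result with keyword and condition
        return results

server = lambda kw, cond: f"{kw}:{cond}:hit"
mgr = KeywordManager([server])
print(mgr.search("news", "today"))  # -> ['news:today:hit']
```

A repeated query with the same keyword and condition is then served from the stored combination rather than re-querying the servers.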
Abstract:
A method and an apparatus for displaying a user interface for schedule registration in an electronic device are provided. The method includes receiving a user conversation including schedule-related contents in voice or text form, displaying a schedule message generated based on the user conversation on a screen of the electronic device, and displaying a selectable reserved schedule other than the displayed schedule message if a user's modification request is received for the schedule message. According to the present disclosure, user convenience is improved and the user can intuitively identify the schedule registration method, because a schedule may be registered without direct user input by analyzing the natural language of a user conversation in text or voice form.
Abstract:
According to certain embodiments, an electronic device comprises a first housing; a second housing; a hinge disposed between the first housing and the second housing such that the second housing is foldable at one end of the first housing; and a flexible display disposed on a surface of the first housing and a surface of the second housing, wherein the flexible display comprises a display panel, and a glass layer disposed on the display panel, such that the display panel is between the glass layer and the surface of the first housing and the surface of the second housing, wherein the glass layer comprises: a bendable portion configured to be flat in an unfolded state when the first housing and the second housing are disposed horizontally adjacent, and to be bent in a folded state when the first housing and the second housing are vertically adjacent; and a first flat portion adjacent to the bendable portion to form a boundary and a second flat portion disposed to extend from the first flat portion to an edge of the glass layer, wherein the glass layer comprises a glass member, wherein the glass member has a first thickness in the second flat portion, has a second thickness at the center of the bendable portion, and has a third thickness less than the first thickness and greater than the second thickness in a section between the first flat portion and the center of the bendable portion, and wherein the thickness of the glass member gradually decreases from the first flat portion to the center of the bendable portion, forming a concave portion.
Abstract:
An electronic device is configured to perform speaker verification on a voice input to determine whether the voice input matches a voice of an enrolled speaker; based on determining that the voice input does not match the voice of the enrolled speaker, perform first speech recognition on the voice input based on a first automatic speech recognition (ASR) model; and based on determining that the voice input matches the voice of the enrolled speaker, perform second speech recognition on the voice input based on a sequence summarizing neural network (SSN) and a second ASR model.
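The routing described above, verification first, then one of two recognition paths, can be sketched as below. Every name here is an assumption for illustration; the verification check and both "models" are trivial stand-ins, not real ASR or SSN components.

```python
def verify_speaker(voice_input, enrolled_speaker):
    """Stand-in speaker verification: compare a speaker label."""
    return voice_input.get("speaker") == enrolled_speaker

def first_asr(voice_input):
    """Generic first ASR model (stand-in)."""
    return voice_input["audio"].lower()

def summarize_sequence(voice_input):
    """Stand-in for the sequence summarizing neural network (SSN)."""
    return {"summary_len": len(voice_input["audio"])}

def second_asr(voice_input, summary):
    """Speaker-adapted second ASR model (stand-in)."""
    return voice_input["audio"].upper()

def recognize(voice_input, enrolled_speaker):
    if verify_speaker(voice_input, enrolled_speaker):
        return second_asr(voice_input, summarize_sequence(voice_input))
    return first_asr(voice_input)

print(recognize({"speaker": "alice", "audio": "Hello"}, "alice"))  # -> HELLO
print(recognize({"speaker": "bob", "audio": "Hello"}, "alice"))    # -> hello
```

The branch structure is the point: the enrolled speaker gets the adapted second model, everyone else falls back to the generic first model.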
Abstract:
An electronic device includes a memory storing instructions; and a processor electrically connected to the memory and configured to execute the instructions to: receive user utterance data; obtain an utterance-domain data set including candidate utterance data that is based on the user utterance data; generate transformed utterance data associated with the user utterance data based on a language model and the utterance-domain data set; and provide a response corresponding to the user utterance data, based on the transformed utterance data. The utterance-domain data set may include at least one candidate utterance data paired with each of a plurality of domains. Each domain of the plurality of domains corresponds to a different operation or function.
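The shape of the utterance-domain data set above, candidate utterances paired with domains, plus a language-model-driven transform, can be sketched roughly as follows. The dictionary layout, the word-overlap "language model", and all names are assumptions, not the disclosed design.

```python
# Each domain (a distinct operation or function) is paired with
# candidate utterance data, per the abstract's description.
utterance_domain_set = {
    "music": ["play a song", "play music"],
    "alarm": ["set an alarm", "wake me up"],
}

def language_model_score(user_utterance, candidate):
    """Stand-in for a real language model: simple word-overlap count."""
    return len(set(user_utterance.split()) & set(candidate.split()))

def transform_utterance(user_utterance):
    """Pick the best-scoring candidate as the transformed utterance."""
    candidates = [c for cands in utterance_domain_set.values() for c in cands]
    return max(candidates, key=lambda c: language_model_score(user_utterance, c))

print(transform_utterance("play me song"))  # -> play a song
```

The transformed utterance, rather than the raw one, then drives response generation.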
Abstract:
An electronic device includes a microphone, a memory, and at least one processor. The at least one processor is configured to acquire utterance data corresponding to a voice of a user through the microphone, determine an intent from the utterance data, and provide content to the user based on the intent. When intent determination fails, one or more models are updated using text obtained from the utterance data. Intent determination may fail when there is no verb (predicate) in the user utterance. The models are updated by searching for named entities and determining the domains to be used for the model updates. The domains are determined based on categories, which are found using a named entity search (NES). Examples of categories are music artists, music albums, movie titles, TV program channels, video clip channels, radio programs, and podcast titles.
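The fallback flow above (no verb, so intent fails; named entities then pick categories, which pick domains) can be sketched as below. The verb list, the entity table, the category-to-domain mapping, and all function names are illustrative assumptions.

```python
CATEGORY_TO_DOMAIN = {
    "music artist": "music",
    "movie title": "video",
    "podcast title": "podcast",
}

NAMED_ENTITIES = {"bts": "music artist", "inception": "movie title"}

VERBS = {"play", "show", "open", "find"}

def determine_intent(text):
    words = text.lower().split()
    if not any(w in VERBS for w in words):
        return None  # intent determination fails: no verb (predicate)
    return "play_content"

def named_entity_search(text):
    """Stand-in NES: map known entities to their categories."""
    return [NAMED_ENTITIES[w] for w in text.lower().split() if w in NAMED_ENTITIES]

def handle_utterance(text):
    intent = determine_intent(text)
    if intent is not None:
        return ("content", intent)
    # Fallback: categories from NES determine the domains to update.
    categories = named_entity_search(text)
    domains = {CATEGORY_TO_DOMAIN[c] for c in categories}
    return ("model_update", sorted(domains))

print(handle_utterance("play bts"))  # -> ('content', 'play_content')
print(handle_utterance("bts"))       # -> ('model_update', ['music'])
```

A verb-less utterance like "bts" thus still produces useful work: it updates the music-domain model instead of returning an error.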
Abstract:
A processor is configured to identify a specified event based on data output from one or more sensors. In response to identifying occurrence of the specified event, the processor transmits, to an external electronic device connected via a communication circuit, a first signal requesting information associated with both the specified event and a virtual space provided by the external electronic device. Based on receiving a second signal corresponding to the first signal from the external electronic device, the processor controls the display to provide the information included in the second signal in a state that is executable by a second application different from a first application used to display the virtual space.
Abstract:
A server for providing content shared to an object is provided. The server is configured to establish communication between a first electronic device of a user and a second electronic device of another user, and to provide, to both devices, a virtual space and an object in the virtual space. Based on an input of the user, the server selects at least one piece of content shared to the object in the virtual space by the other user of the second electronic device, who has entered the same virtual space as the first electronic device. The server activates the object to identically output the selected content to each electronic device entering the virtual space. When the second electronic device leaves the virtual space, the server stops providing the content shared by the other user.
Abstract:
Disclosed are an electronic device and a method of controlling a text-to-speech (TTS) rate. An electronic device may include a processor, and a memory configured to store instructions to be executed by the processor. The processor may receive a voice signal of a user. The processor may calculate a speaking rate of the voice signal based on the voice signal. The processor may generate an output text to be output to the user based on the voice signal. The processor may determine a TTS rate of the output text based on the speaking rate. The processor may convert the output text into voice data based on the TTS rate and output the voice data.
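The rate-matching step above can be sketched numerically. This is a minimal sketch under assumptions: the syllables-per-second measure, the baseline rate of 4.0, and the `synthesize` stub are all illustrative, not from the disclosure.

```python
def speaking_rate(num_syllables, duration_s):
    """Measure the user's speaking rate in syllables per second."""
    return num_syllables / duration_s

def tts_rate(user_rate, baseline_rate=4.0):
    """Scale factor relative to the engine's assumed default rate."""
    return user_rate / baseline_rate

def synthesize(text, rate):
    """Stand-in for a real TTS engine converting text to voice data."""
    return {"text": text, "rate": rate}

user_rate = speaking_rate(num_syllables=24, duration_s=4.0)  # 6.0 syllables/s
voice = synthesize("Here is your answer.", tts_rate(user_rate))
print(voice["rate"])  # -> 1.5
```

A user who speaks faster than the baseline thus hears the response played back proportionally faster, and vice versa.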