Abstract:
Disclosed is a method of editing voice recognition results in a portable device. The method includes a process of converting the voice recognition results into text and displaying the text on a touch panel, a process of recognizing a touch interaction on the touch panel, a process of analyzing the execution intent of the recognized touch interaction, and a process of editing the contents of the text based on the analyzed execution intent.
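To make the flow concrete, below is a minimal Python sketch of the pipeline this abstract describes: a recognized touch interaction is mapped to an edit intent, which is then applied to the displayed text. The gesture names, the intent table, and the word-level edit operations are illustrative assumptions, not the patented method itself.

```python
# Minimal sketch: map a recognized touch interaction to an edit intent,
# then apply that intent to the displayed recognition result.

def analyze_intent(gesture: str) -> str:
    """Map a touch interaction to an edit intent (hypothetical table)."""
    intents = {
        "swipe_left_on_word": "delete_word",
        "double_tap_on_word": "replace_word",
    }
    return intents.get(gesture, "none")

def edit_text(words, index, gesture, replacement=None):
    """Apply the analyzed intent to the text shown on the touch panel."""
    intent = analyze_intent(gesture)
    if intent == "delete_word":
        return words[:index] + words[index + 1:]
    if intent == "replace_word" and replacement is not None:
        return words[:index] + [replacement] + words[index + 1:]
    return words

if __name__ == "__main__":
    result = "recognize speech with a mobile device".split()
    print(" ".join(edit_text(result, 1, "swipe_left_on_word")))
    print(" ".join(edit_text(result, 0, "double_tap_on_word", "transcribe")))
```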
Abstract:
A simultaneous interpretation system using a translation unit bilingual corpus includes a microphone configured to receive an utterance of a user, a memory in which a program for recognizing the utterance of the user and generating a translation result is stored, and a processor configured to execute the program stored in the memory, wherein the processor executes the program so as to convert the received utterance of the user into text, store the text in a speech recognition buffer, perform translation unit recognition with respect to the text on the basis of a learning model for translation unit recognition, and in response to the translation unit recognition being completed, generate a translation result corresponding to the translation unit on the basis of a translation model for performing translation.
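A minimal Python sketch of the buffering loop described above: recognized text accumulates in a speech recognition buffer, a translation-unit recognizer decides when a unit is complete, and only then is a translation generated. The punctuation heuristic stands in for the learned translation-unit model, and translate() is a placeholder for the translation model.

```python
# Minimal sketch: buffer incremental recognition output and translate
# only once a translation unit is recognized as complete.

class SimultaneousInterpreter:
    def __init__(self):
        self.buffer = []  # speech recognition buffer

    def unit_complete(self, tokens) -> bool:
        # Stand-in for the learned translation-unit recognition model.
        return bool(tokens) and tokens[-1].endswith((".", ",", "?", "!"))

    def translate(self, unit: str) -> str:
        # Placeholder for the translation model.
        return f"<translation of: {unit}>"

    def feed(self, token: str):
        self.buffer.append(token)
        if self.unit_complete(self.buffer):
            unit = " ".join(self.buffer)
            self.buffer.clear()
            return self.translate(unit)
        return None

if __name__ == "__main__":
    interp = SimultaneousInterpreter()
    for tok in "please translate this sentence, and also this one.".split():
        out = interp.feed(tok)
        if out:
            print(out)
```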
Abstract:
The present invention relates to a system and method for editing text on a portable terminal, and more particularly to a technology for editing text input into a portable terminal based on a touch interface. An exemplary embodiment of the present invention provides a text editing system of a portable terminal, including: an interface unit which inputs or outputs text or voice; a text generating unit which converts the input text or voice into text; a control unit which provides a keyboard-based editing screen or a character-recognition-based editing screen for the generated text through the interface unit; and a text editing unit which performs an editing command input by a user through the keyboard-based editing screen or the character-recognition-based editing screen under the control of the control unit.
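The following is a minimal sketch of the control flow, assuming a trivial command format: the control unit routes an editing command to either a keyboard-based or a character-recognition-based editor. The mode names, the command dictionaries, and the stroke "recognition" are hypothetical stand-ins.

```python
# Minimal sketch: a control unit dispatching between two editing screens.

def keyboard_edit(text: str, command: dict) -> str:
    # Keyboard-based editing: replace a character range with typed input.
    s, e = command["start"], command["end"]
    return text[:s] + command["input"] + text[e:]

def handwriting_edit(text: str, command: dict) -> str:
    # Character-recognition-based editing: overwrite with recognized strokes.
    recognized = command["strokes"].upper()  # stand-in for stroke recognition
    s = command["start"]
    return text[:s] + recognized + text[s + len(recognized):]

def control_unit(text: str, mode: str, command: dict) -> str:
    editors = {"keyboard": keyboard_edit, "handwriting": handwriting_edit}
    return editors[mode](text, command)

if __name__ == "__main__":
    print(control_unit("hello world", "keyboard",
                       {"start": 6, "end": 11, "input": "there"}))
    print(control_unit("hello world", "handwriting",
                       {"start": 0, "strokes": "jello"}))
```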
Abstract:
A translation verification method using an animation may include the processes of analyzing an originally input sentence in a first language using a translation engine so that the sentence in the first language is converted into a second language, generating an animation capable of representing the meaning of the sentence in the first language based on information on the results of the analysis of the sentence, and providing the original sentence and the generated animation to the user who input the original so that the user can check for errors in the translation.
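As a rough illustration, the sketch below reduces the analysis result of the first-language sentence to an actor/action/object frame and renders it as a list of animation keyframes to be shown alongside the original. The frame fields and keyframe format are assumptions; a real system would derive them from the translation engine's analysis.

```python
# Minimal sketch: turn a sentence analysis into animation keyframes.

def analysis_to_frames(analysis: dict) -> list:
    """Map an actor/action/object analysis to a simple keyframe script."""
    return [
        {"t": 0.0, "show": analysis["subject"]},
        {"t": 0.5, "play": analysis["action"]},
        {"t": 1.0, "show": analysis["object"]},
    ]

if __name__ == "__main__":
    # Analysis of "The boy throws a ball" as it might come from the engine.
    analysis = {"subject": "boy", "action": "throw", "object": "ball"}
    for frame in analysis_to_frames(analysis):
        print(frame)
```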
Abstract:
Provided is a method of generating a language model using crossmodal information. The method includes: receiving language-based first modality information and non-language-based second modality information; converting the first modality information into a first byte sequence; converting the second modality information into a second byte sequence; converting the first and second byte sequences into a first embedding vector and a second embedding vector by applying an embedding technique for each modality; generating semantic association information between the first and second modality information by inputting the first and second embedding vectors to a crossmodal transformer; and training the language model using the generated semantic association information as training data.
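A minimal sketch of the two conversion steps, assuming a toy hash-bucket embedding in place of the per-modality embedding technique; the crossmodal transformer itself is elided.

```python
# Minimal sketch: serialize each modality to bytes, then embed per modality.

def to_byte_sequence(data) -> bytes:
    """Language input arrives as text; non-language input as raw bytes."""
    return data.encode("utf-8") if isinstance(data, str) else bytes(data)

def embed(byte_seq: bytes, dim: int = 8) -> list:
    """Toy per-modality embedding: bucket byte counts into a fixed vector."""
    vec = [0.0] * dim
    for b in byte_seq:
        vec[b % dim] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

if __name__ == "__main__":
    first = embed(to_byte_sequence("a dog chasing a ball"))  # language modality
    second = embed(to_byte_sequence([137, 80, 78, 71, 13]))  # e.g. image bytes
    # Both vectors would be fed to the crossmodal transformer here.
    print(first)
    print(second)
```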
Abstract:
The present invention provides a method of generating training data to which explicit word-alignment information is added without impairing sub-word tokens, and a neural machine translation method and apparatus including the method. The method of generating training data includes the steps of: (1) separating basic word boundaries through morphological analysis or named entity recognition of a sentence of a bilingual corpus used for learning; (2) extracting explicit word-alignment information from the sentence of the bilingual corpus used for learning; (3) further dividing the word boundaries separated in step (1) into sub-word tokens; (4) generating new source language training data by using the output from step (1) and the output from step (3); and (5) generating new target language training data by using the explicit word-alignment information generated in step (2) and the target language outputs from steps (1) and (3).
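A minimal sketch of steps (1) through (5): word boundaries are kept explicit while words are further split into sub-word tokens, and alignment indices are carried over to the target side. The whitespace "morphological analysis", the toy one-to-one aligner, and the "@@" sub-word convention are stand-ins.

```python
# Minimal sketch of the training-data generation steps.

def word_boundaries(sentence: str) -> list:          # step (1)
    return sentence.split()                          # stand-in analysis

def align(src_words: list, tgt_words: list) -> list: # step (2), toy aligner
    return [(i, i) for i in range(min(len(src_words), len(tgt_words)))]

def subword(word: str, size: int = 3) -> list:       # step (3)
    pieces = [word[i:i + size] for i in range(0, len(word), size)]
    return [p + "@@" for p in pieces[:-1]] + [pieces[-1]]

def make_training_data(words: list) -> list:         # steps (4) and (5)
    out = []
    for w in words:
        out += subword(w)
    return out

if __name__ == "__main__":
    src = word_boundaries("interpretation systems translate speech")
    tgt = word_boundaries("systems for interpretation translate speech")
    print(make_training_data(src))  # new source language training data
    print(align(src, tgt))          # alignment carried to the target side
```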
Abstract:
The present invention relates to a translation function and discloses an automatic translation operating device, a method thereof, and a system including the same. The device includes: at least one of a voice input device which collects voice signals input by a plurality of speakers and a communication module which receives the voice signals; and a control unit which classifies the voice signals by speaker, clusters the classified speaker-based voice signals in accordance with a predefined condition, and then performs voice recognition and translation.
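The control unit's pipeline might look like the following sketch: mixed voice segments are classified by speaker, grouped per speaker, and each group is then passed to recognition and translation. The segment format and the stub recognize/translate calls are assumptions.

```python
# Minimal sketch: classify voice signals by speaker, cluster per speaker,
# then perform recognition and translation on each cluster.
from collections import defaultdict

def classify_by_speaker(segments):
    grouped = defaultdict(list)
    for seg in segments:
        grouped[seg["speaker"]].append(seg["audio"])  # stand-in classifier
    return grouped

def recognize_and_translate(audio_chunks):
    utterance = " ".join(audio_chunks)       # stub for voice recognition
    return f"<translation of: {utterance}>"  # stub for translation

if __name__ == "__main__":
    segments = [
        {"speaker": "A", "audio": "hello"},
        {"speaker": "B", "audio": "bonjour"},
        {"speaker": "A", "audio": "how are you"},
    ]
    for speaker, chunks in classify_by_speaker(segments).items():
        print(speaker, recognize_and_translate(chunks))
```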
Abstract:
Provided is a method of providing an interpretation result using visual information, and the method includes: acquiring a spatial domain image including line-of-sight information of a user and gaze position information in the spatial domain image; segmenting the acquired spatial domain image into a plurality of images; detecting text areas including text for each of the segmented images; generating text blocks, each of which is a text recognition result for each of the detected text areas, and determining the text block corresponding to the gaze position information; converting the first language included in the determined text block into a second language, which is the target language; and providing the user with the conversion result in the second language.
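A minimal sketch of the gaze-driven selection step: detected text blocks carry bounding boxes, the block containing the gaze position is selected, and only that block's text is converted to the target language. The block format and the stub translator are assumptions; image segmentation and OCR are elided.

```python
# Minimal sketch: pick the text block under the gaze point and translate it.

def block_at_gaze(blocks, gaze):
    gx, gy = gaze
    for b in blocks:  # each block: bounding box plus recognized text
        x, y, w, h = b["box"]
        if x <= gx <= x + w and y <= gy <= y + h:
            return b
    return None

def translate(text: str, target: str = "en") -> str:
    return f"<{target} translation of: {text}>"  # stub for the MT engine

if __name__ == "__main__":
    blocks = [
        {"box": (0, 0, 100, 40), "text": "출구"},   # Korean sign: "exit"
        {"box": (0, 60, 100, 40), "text": "입구"},  # Korean sign: "entrance"
    ]
    hit = block_at_gaze(blocks, gaze=(50, 75))
    if hit:
        print(translate(hit["text"]))
```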
Abstract:
The present invention relates to a system for translating a language based on a user's reaction. The system includes: an interface unit which inputs uttered sentences of a first user and a second user and outputs the translated result; a translating unit which translates the uttered sentences of the first user and the second user; a conversation intention recognizing unit which determines a conversation intention of the second user from the second user's reply to a translation result of the first user's utterance; a translation result evaluating unit which evaluates the translation of the uttered sentence of the first user based on the conversation intention of the second user determined by the conversation intention recognizing unit; and a translation result evaluation storing unit which stores the translation result and an evaluation of the translation result.
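The evaluation loop can be sketched as follows: the second user's reply is mapped to a coarse conversation intention, which is turned into an evaluation of the preceding translation and stored with it. The keyword-based intention recognizer and the score values are illustrative assumptions.

```python
# Minimal sketch: evaluate a translation from the second user's reaction.

def recognize_intention(reply: str) -> str:
    """Map the second user's reply to a coarse conversation intention."""
    reply = reply.lower()
    if any(k in reply for k in ("what?", "pardon", "sorry?")):
        return "clarification_request"  # suggests a poor translation
    return "coherent_response"

def evaluate(intention: str) -> float:
    """Turn the recognized intention into a translation quality score."""
    return 0.2 if intention == "clarification_request" else 0.9

if __name__ == "__main__":
    store = []  # translation result evaluation storage
    translation = "<translation of first user's utterance>"
    reply = "Pardon, what do you mean?"
    intention = recognize_intention(reply)
    store.append({"translation": translation, "score": evaluate(intention)})
    print(store)
```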
Abstract:
The present invention relates to a spoken dialog system and method based on dual dialog management using a hierarchical dialog task library. The system may increase the reusability of dialog knowledge by constructing and packaging the dialog knowledge in task units having a hierarchical structure, and may construct and process the dialog knowledge using a dialog plan scheme that describes the relationships between tasks, which makes the design of a dialog service convenient. This differs from existing spoken dialog systems, in which dialog knowledge is difficult to reuse because a large amount of construction cost and time is required.
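A minimal sketch of a hierarchical dialog task library: dialog knowledge is packaged per task, subtasks are reused across services, and a simple dialog plan links them. The task names and plan structure are illustrative assumptions.

```python
# Minimal sketch: packaged dialog tasks with a hierarchical structure,
# where a subtask is reused by two different dialog services.

class DialogTask:
    def __init__(self, name, prompts=None, subtasks=None):
        self.name = name
        self.prompts = prompts or []
        self.subtasks = subtasks or []  # hierarchical structure

    def run(self):
        for p in self.prompts:
            print(f"[{self.name}] {p}")
        for t in self.subtasks:         # dialog plan: run subtasks in order
            t.run()

if __name__ == "__main__":
    confirm = DialogTask("confirm", ["Is that correct?"])  # reused task unit
    booking = DialogTask("book_flight", ["Where to?"], [confirm])
    weather = DialogTask("weather", ["Which city?"], [confirm])
    booking.run()
    weather.run()
```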