Abstract:
Disclosed is a method of editing voice recognition results in a portable device. The method includes a process of converting the voice recognition results into text and displaying the text on a touch panel, a process of recognizing a touch interaction on the touch panel, a process of analyzing the intent of the recognized touch interaction, and a process of editing the contents of the text based on the analyzed intent.
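The flow described above (recognize a touch, infer its intent, apply the edit) can be illustrated with a minimal sketch. The class and function names (TouchEvent, infer_edit_intent, apply_edit) and the mapping from gestures to intents are assumptions for illustration, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    kind: str   # e.g. "tap", "long_press", "drag" (assumed gesture set)
    start: int  # character offset where the touch began
    end: int    # character offset where the touch ended

def infer_edit_intent(event: TouchEvent) -> str:
    """Map a recognized touch interaction to an edit intent."""
    if event.kind == "long_press":
        return "delete"
    if event.kind == "drag" and event.end != event.start:
        return "select_for_replace"
    return "place_cursor"

def apply_edit(text: str, event: TouchEvent, replacement: str = "") -> str:
    """Edit the displayed recognition text according to the inferred intent."""
    intent = infer_edit_intent(event)
    if intent == "delete":
        return text[:event.start] + text[event.end:]
    if intent == "select_for_replace":
        return text[:event.start] + replacement + text[event.end:]
    return text  # cursor placement does not change the text

# Example: delete a mis-recognized word by long-pressing it.
recognized = "send massage to John"
edited = apply_edit(recognized, TouchEvent("long_press", 5, 12))
print(edited)  # "send  to John" (ready for re-dictation)
```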
Abstract:
The present invention relates to an apparatus and method for providing a two-way automatic interpretation and translation service. The apparatus includes a first interpretation and translation unit for interpreting and translating a first language into a second language. A second interpretation and translation unit interprets and translates the second language into the first language. A context information management unit receives, shares, and manages conversational context and translation history information. Each of the first and second interpretation and translation units provides an interpretation service, which receives an input conversation in the first or second language as speech and outputs interpretation results as speech in the second or first language, and a translation service, which receives an input conversation in the first or second language as text and outputs translation results as text in the second or first language.
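A minimal sketch of this two-way structure is shown below. The ContextManager stands in for the context information management unit shared by both directions; the translation back end is stubbed, and all names are illustrative assumptions rather than the disclosed implementation.

```python
class ContextManager:
    """Receives, shares, and manages conversational context and history."""
    def __init__(self):
        self.history = []  # (source_lang, source_text, target_text) tuples

    def add(self, src_lang, source, target):
        self.history.append((src_lang, source, target))

    def recent_context(self, n=5):
        return self.history[-n:]

class InterpretationTranslationUnit:
    def __init__(self, src_lang, tgt_lang, context: ContextManager):
        self.src_lang, self.tgt_lang = src_lang, tgt_lang
        self.context = context  # shared with the unit for the other direction

    def translate_text(self, text: str) -> str:
        # Placeholder for a real MT call that could condition on
        # self.context.recent_context() when producing its output.
        result = f"[{self.tgt_lang}] {text}"
        self.context.add(self.src_lang, text, result)
        return result

# Both directions share a single context manager, as the abstract describes.
shared = ContextManager()
first_unit = InterpretationTranslationUnit("ko", "en", shared)
second_unit = InterpretationTranslationUnit("en", "ko", shared)
first_unit.translate_text("안녕하세요")
second_unit.translate_text("Hello")
print(shared.recent_context())
```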
Abstract:
Disclosed are a system and method for automatically evaluating an essay. The system includes a structure analysis module configured to divide learning data and learner essay text into predetermined structure analysis units, generate structure tagging information for each unit, and structure the learning data and the learner essay text by attaching the structure tagging information to them. A learning module is configured to generate an essay evaluation model through learning, using the essay text and structure tagging information included in the structured learning data as input values and the evaluation score included in the structured learning data as a label. An evaluation module is configured to generate essay evaluation results using the essay evaluation model.
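The three-module pipeline (structure analysis, learning, evaluation) could be sketched as follows. The paragraph-level tagging scheme and the scikit-learn regressor are assumptions chosen for brevity, not the patented model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

def structure_analyze(essay: str) -> str:
    """Split an essay into paragraph-level units and attach structure tags."""
    units = [p.strip() for p in essay.split("\n\n") if p.strip()]
    tagged = []
    for i, unit in enumerate(units):
        tag = "INTRO" if i == 0 else "CONCL" if i == len(units) - 1 else "BODY"
        tagged.append(f"<{tag}> {unit}")
    return " ".join(tagged)

# Learning module: structured essays are the inputs, evaluation scores the labels.
train_essays = ["Intro...\n\nBody argument...\n\nConclusion...", "A short essay."]
train_scores = [4.5, 2.0]
model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit([structure_analyze(e) for e in train_essays], train_scores)

# Evaluation module: score a new learner essay with the trained model.
print(model.predict([structure_analyze("A new learner essay...\n\nwith a body.")]))
```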
Abstract:
A translation verification method using an animation may include analyzing an originally input sentence in a first language with a translation engine so that the sentence is converted into a second language, generating an animation that represents the meaning of the first-language sentence based on the results of that analysis, and providing the original sentence and the generated animation to the user so that the user can check for errors in the translation.
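A minimal sketch of this verification flow follows. The analyze_sentence() stub stands in for the translation engine's analysis output, and render_animation() stands in for an animation generator; both are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Analysis:
    subject: str
    verb: str
    obj: str

def analyze_sentence(sentence: str) -> Analysis:
    # In the described method this information comes from the translation
    # engine's analysis of the first-language sentence; here it is stubbed.
    words = sentence.rstrip(".").split()
    return Analysis(subject=words[0], verb=words[1], obj=" ".join(words[2:]))

def render_animation(a: Analysis) -> str:
    """Produce a simple storyboard the user can compare with the original."""
    return f"scene: '{a.subject}' performs '{a.verb}' on '{a.obj}'"

original = "Cats chase mice."
analysis = analyze_sentence(original)
print(original)                    # shown to the user alongside...
print(render_animation(analysis))  # ...the animation built from the analysis
```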
Abstract:
Disclosed are an automatic translation apparatus and method capable of optimizing the limited translation knowledge in a database mounted in a portable mobile communication terminal, obtaining translation knowledge from external servers in order to provide translation knowledge appropriate for each user, and effectively updating the database mounted in the terminal. The automatic translation apparatus includes: an input unit configured to receive, from a user, translation target information to be translated; a translation unit configured to translate the translation target information based on translation data included in a translation database and to extract translation information generated during the translation process; and a communication unit configured to transmit the translation information to a first server and to receive from the first server new translation data that is not included in the translation database, from among the various kinds of data necessary to perform the translation of the translation target information.
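The split between the translation unit (local database lookup plus collection of missing knowledge) and the communication unit (fetching new data from the server and updating the database) can be sketched as below. The word-by-word lookup and the server response format are assumptions, not part of the disclosure.

```python
class TranslationDatabase:
    def __init__(self, entries=None):
        self.entries = dict(entries or {})  # source phrase -> target phrase

    def update(self, new_entries):
        self.entries.update(new_entries)    # merge data fetched from the server

class TranslationUnit:
    def __init__(self, db: TranslationDatabase):
        self.db = db
        self.missing = []                   # translation info gathered during translation

    def translate(self, text: str) -> str:
        out = []
        for word in text.split():
            if word in self.db.entries:
                out.append(self.db.entries[word])
            else:
                self.missing.append(word)   # knowledge the local database lacks
                out.append(word)
        return " ".join(out)

def fetch_from_server(missing):
    # Placeholder for the communication unit's request to the first server;
    # a real implementation would send `missing` and receive new entries.
    return {w: f"<{w}-translated>" for w in missing}

db = TranslationDatabase({"hello": "bonjour"})
unit = TranslationUnit(db)
print(unit.translate("hello world"))
db.update(fetch_from_server(unit.missing))  # update the on-device database
print(unit.translate("hello world"))
```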
Abstract:
Disclosed herein are a motion sensor-based portable automatic interpretation apparatus and a control method thereof, which can precisely detect the start time and the end time of a user's utterance in a portable automatic interpretation system, thus improving the quality of the automatic interpretation system. The motion sensor-based portable automatic interpretation apparatus includes a motion sensing unit for sensing a motion of the portable automatic interpretation apparatus. An utterance start time detection unit detects an utterance start time based on an output signal of the motion sensing unit. An utterance end time detection unit detects an utterance end time based on an output signal of the motion sensing unit after the utterance start time has been detected.
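A minimal sketch of motion-based endpoint detection is given below. The raise/lower gesture interpretation and the acceleration thresholds are illustrative assumptions about how the motion sensing unit's output might be used, not the disclosed control method.

```python
RAISE_THRESHOLD = 2.0   # m/s^2 above rest, assumed to mean "device raised to mouth"
LOWER_THRESHOLD = -2.0  # assumed to mean "device lowered again"

def detect_utterance_window(accel_samples):
    """Return (start_index, end_index) of the utterance, or None if not found."""
    start = end = None
    for i, a in enumerate(accel_samples):
        if start is None and a >= RAISE_THRESHOLD:
            start = i                       # utterance start time detected
        elif start is not None and a <= LOWER_THRESHOLD:
            end = i                         # utterance end time detected
            break
    return (start, end) if start is not None and end is not None else None

# Example accelerometer trace: rest, raise, speech (steady), lower, rest.
samples = [0.0, 0.1, 2.5, 0.2, 0.1, 0.0, -2.4, -0.1]
print(detect_utterance_window(samples))  # (2, 6)
```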