Abstract:
A method of creating mnemonic associations for the tones of a phoneme string in a tonal language by representing each tone with a per-tone color assigned to it, comprising the steps of: specifying a phoneme string; specifying a tone; specifying a per-tone color; coloring at least part of the foreground, background, or periphery of the phoneme string or of a mnemonic image with the per-tone color, so that a colored mnemonic image or phoneme string is generated; and outputting the colored mnemonic image or phoneme string to a screen. Two or more per-tone color sets are stored in a per-tone color storage unit; for each per-tone color set, a per-tone color mnemonic story that aids the tone-color association is stored in a per-tone color mnemonic story database; and the per-tone color is specified from the per-tone color set determined according to a per-tone color mnemonic story selected from the per-tone color mnemonic story database.
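As a rough illustration of the claimed flow, the sketch below maps tone numbers to colors drawn from one of several stored color sets, each tied to a mnemonic story. The set names, colors, stories, and the function `colorize` are invented examples, not taken from the abstract:

```python
# Illustrative sketch: map Mandarin tone numbers to per-tone colors chosen
# from a color set that is itself selected via a mnemonic story record.
# All names and data here are hypothetical, not the patent's.

COLOR_SETS = {
    "traffic": {1: "green", 2: "yellow", 3: "orange", 4: "red"},
    "seasons": {1: "pink", 2: "lightgreen", 3: "gold", 4: "white"},
}

# Each story explains why a set's colors suit each tone, strengthening
# the tone-color association for the learner.
STORIES = {
    "traffic": "Tone 1 is flat like an open green road; tone 4 stops hard like a red light.",
    "seasons": "Tone 1 blooms like spring; tone 4 falls like winter snow.",
}

def colorize(syllables, story_key):
    """Return (syllable, color) pairs using the color set tied to a story."""
    color_set = COLOR_SETS[story_key]
    return [(text, color_set[tone]) for text, tone in syllables]

# Usage: "ma1 ma2 ma3 ma4" rendered with the 'traffic' story's colors.
print(STORIES["traffic"])
print(colorize([("ma", 1), ("ma", 2), ("ma", 3), ("ma", 4)], "traffic"))
```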
Abstract:
A method of enriching text includes receiving a text file and parsing the received text file into logical phrases each having a phrase type. The logical phrases are processed based on their respective phrase types. A first processing step determines whether to process each logical phrase as a whole or in parts, and further identifies, splits or combines phrases according to pre-defined logic to determine a contextual meaning for each logical phrase. Additional processing steps determine a contextual part of speech for each word in the logical phrases and identify enrichment content pertaining to each of the words and the logical phrases. The words and logical phrases are associated and stored with the enrichment content respectively pertaining thereto such that the enrichment content is renderable on a user computing device when the word or logical phrase associated therewith is selected by a user on the user computing device.
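A minimal sketch of the described pipeline, assuming an invented phrase-type rule, a toy lexicon, and illustrative names (`parse_phrases`, `enrich`, and the data shapes are not from the abstract):

```python
# Hypothetical sketch of the enrichment pipeline: parse text into typed
# logical phrases, tag each word's contextual part of speech, and attach
# enrichment content keyed by word. Names and rules are illustrative only.
import re
from dataclasses import dataclass, field

@dataclass
class LogicalPhrase:
    text: str
    phrase_type: str                              # e.g. "clause", "fragment"
    pos_tags: dict = field(default_factory=dict)
    enrichment: dict = field(default_factory=dict)

def parse_phrases(text):
    """Split text on sentence-like boundaries into typed logical phrases."""
    parts = [p.strip() for p in re.split(r"[.!?;]", text) if p.strip()]
    return [LogicalPhrase(p, "clause" if " " in p else "fragment") for p in parts]

def enrich(phrase, lexicon):
    """Attach per-word POS and enrichment content found in a lexicon."""
    for word in phrase.text.lower().split():
        if word in lexicon:
            pos, content = lexicon[word]
            phrase.pos_tags[word] = pos
            phrase.enrichment[word] = content     # rendered when word is selected
    return phrase

lexicon = {"bank": ("noun", "definition: financial institution")}
for p in parse_phrases("She walked to the bank. Quickly!"):
    print(enrich(p, lexicon))
```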
Abstract:
An enunciation system (ES) and method according to the present disclosure enables users to gain acquaintance with, understanding of, and mastery of the relationship between letters and sounds in the context of an alphabetic writing system. An ES as disclosed herein enables the user to experience the action of sounding out a word before the user's own phonics knowledge enables sounding out the word independently; its continuous, unbroken speech output or input avoids the common confusions that ensue from analyzing words by breaking them up into discrete sounds; its user-controlled pacing allows the user to slow down enunciation at specific points of difficulty within the word; and its real-time touch control allows the written word to be "played" like a musical instrument, with expressive and aesthetic possibilities.
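A minimal sketch of such touch-controlled pacing, assuming a hypothetical letter-to-time alignment for one recorded word; the alignment values and the function `touch_to_time` are illustrative, not from the disclosure:

```python
# Map a finger position along the written word to a playback time in a
# time-aligned recording, so dragging slowly stretches enunciation at
# points of difficulty without chopping the word into discrete sounds.

# Letter-to-time alignment for a recording of "ship" (seconds into the audio).
ALIGNMENT = [("s", 0.00), ("h", 0.10), ("i", 0.25), ("p", 0.45)]
WORD_END = 0.60

def touch_to_time(x, word_width):
    """Map a horizontal touch position (pixels) to a time in the recording."""
    frac = min(max(x / word_width, 0.0), 1.0)
    n = len(ALIGNMENT)
    idx = min(int(frac * n), n - 1)
    letter, start = ALIGNMENT[idx]
    end = ALIGNMENT[idx + 1][1] if idx + 1 < n else WORD_END
    within = frac * n - idx            # position inside this letter's span
    return letter, start + within * (end - start)

# Dragging across the word scrubs continuously through the audio.
for x in (10, 120, 200, 310):
    letter, t = touch_to_time(x, word_width=320)
    print(f"x={x:3d}px -> letter '{letter}', audio t={t:.2f}s")
```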
Abstract:
Among other things, a succession of conversations is facilitated between a user of a device and a non-human companion portrayed on the device, to develop a relationship between the user and the non-human companion over a time period that spans the successive conversations. The relationship is developed between the user and the non-human companion to cause a change in a state of the user over the time period. Each of the successive conversations is facilitated by actions that include the following. A segment of speech of the non-human companion is presented to the user. A segment of speech of the user is detected. The user's segment of speech and the segment of speech presented to the user constitute portions of the conversation. At the device, information is received from an intelligent agent about a next segment of speech to be presented.
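The turn-taking described here can be sketched as a loop; the `Device` and `Agent` stubs below are illustrative stand-ins for the device's audio I/O and the intelligent agent, not the disclosed implementation:

```python
# Hypothetical sketch of facilitated conversation turns: play the
# companion's speech segment, detect the user's reply, and ask an
# intelligent agent for the next segment to present.

class Device:
    def play(self, segment):           # present companion speech to the user
        print(f"Companion: {segment}")
    def listen(self):                  # detect a segment of the user's speech
        return "I felt tired today."

class Agent:
    def next_segment(self, companion_said, user_said, history):
        # A real agent would choose speech that develops the relationship
        # over successive conversations; here we return a canned follow-up.
        return f"You said '{user_said}'. Tell me more."

state = {"next_segment": "How was your day?", "history": []}
device, agent = Device(), Agent()
for _ in range(2):                     # two turns of one conversation
    device.play(state["next_segment"])
    user = device.listen()
    state["history"].append((state["next_segment"], user))
    state["next_segment"] = agent.next_segment(state["next_segment"], user,
                                               state["history"])
```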
Abstract:
The present disclosure relates to computing technologies for diagnosis and therapy of language-related disorders. Such technologies enable computer-generated diagnosis and computer-generated therapy delivered over a network to at least one computing device. The diagnosis and therapy are customized for each patient through a comprehensive analysis of the patient's production and reception errors, as obtained from the patient over the network, together with a set of correct responses at each phase of evaluation and therapy.
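A minimal sketch of such per-patient customization, assuming an invented response format and weighting rule: production and reception errors are tallied per phoneme against the correct responses, and the next therapy phase is weighted toward the weakest categories.

```python
# Illustrative error analysis driving therapy customization.
# Response format, error-rate weighting, and names are assumptions.
from collections import Counter

def analyze(responses):
    """responses: list of (phoneme, mode, correct), where mode is
    'production' or 'reception' and correct is a bool."""
    errors, totals = Counter(), Counter()
    for phoneme, mode, correct in responses:
        totals[(phoneme, mode)] += 1
        if not correct:
            errors[(phoneme, mode)] += 1
    # Error rate per (phoneme, mode); higher rate -> more therapy items.
    return {k: errors[k] / totals[k] for k in totals}

def plan_therapy(rates, items_per_phase=10):
    """Allocate the phase's items in proportion to each category's errors."""
    total = sum(rates.values()) or 1.0
    return {k: round(items_per_phase * r / total) for k, r in rates.items()}

responses = [("r", "production", False), ("r", "production", False),
             ("r", "reception", True), ("l", "production", True),
             ("l", "reception", False)]
print(plan_therapy(analyze(responses)))
```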
Abstract:
Typical speech recognition systems usually use speaker-specific speech data to apply speaker adaptation to models and parameters associated with the speech recognition system. Given that speaker-specific speech data may not be available to the speech recognition system, information indicative of language skills is employed instead in adapting configurations of the speech recognition system. According to at least one example embodiment, a method, and corresponding apparatus, for speech recognition comprise maintaining information indicative of language skills of users of the speech recognition system. A configuration of the speech recognition system for a user is determined based at least in part on the corresponding information indicative of that user's language skills. Upon receiving speech data from the user, the determined configuration of the speech recognition system is employed in performing speech recognition.
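A minimal sketch of this adaptation, assuming hypothetical profile fields and configuration values (the acoustic-model names and `lm_weight` numbers are illustrative, not from the abstract):

```python
# Pick an ASR configuration from maintained language-skill information,
# rather than from speaker-specific speech data. All values are invented.

SKILL_PROFILES = {"alice": {"proficiency": "native"},
                  "bob":   {"proficiency": "beginner"}}

def configure_recognizer(user_id):
    """Determine a configuration from the user's language-skill info."""
    skill = SKILL_PROFILES.get(user_id, {}).get("proficiency", "unknown")
    if skill == "beginner":
        # Non-native speech: allow broader pronunciation variants and trust
        # the language model less, since phrasing may be atypical.
        return {"acoustic_model": "non_native", "lm_weight": 0.6}
    return {"acoustic_model": "native", "lm_weight": 0.9}

def recognize(user_id, speech_data):
    config = configure_recognizer(user_id)   # determined before decoding
    return f"decode({len(speech_data)} samples) with {config}"

print(recognize("bob", b"\x00" * 16000))
```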
Abstract:
The present invention provides a pronunciation correction method for assisting a foreign language learner in correcting the position of the tongue or the shape of the lips when pronouncing a foreign language. According to an implementation of this invention, the pronunciation correction method comprises receiving an audio signal constituting the user's pronunciation of a phonetic symbol selected as a practice target, analyzing the audio signal, generating a tongue position image corresponding to the audio signal based on the analysis results, and displaying the generated tongue position image.
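One plausible reading of the analyze-then-display steps is formant-based: F1 roughly tracks tongue height and F2 tongue backness. The formant approach itself, the thresholds, and the image names below are assumptions for illustration, not claimed by the abstract:

```python
# Estimate tongue height/backness from the first two formants and pick a
# stock tongue-position image to display. All values are illustrative.

def estimate_tongue_position(f1_hz, f2_hz):
    height = "high" if f1_hz < 400 else "low" if f1_hz > 700 else "mid"
    backness = "front" if f2_hz > 1800 else "back" if f2_hz < 1100 else "central"
    return height, backness

def tongue_image(f1_hz, f2_hz):
    height, backness = estimate_tongue_position(f1_hz, f2_hz)
    return f"tongue_{height}_{backness}.png"   # image shown to the learner

# e.g. an [i]-like vowel (low F1, high F2) vs. an [a]-like vowel.
print(tongue_image(300, 2300))   # tongue_high_front.png
print(tongue_image(750, 1100))   # tongue_low_central.png
```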
Abstract:
The present invention relates to a mask display apparatus for learning and a mask display method for learning. A mask display apparatus for learning according to an embodiment of the present invention includes a display unit that displays an electronic document, a user command input unit that receives a user's command, and a controller that, in response to a mask display command entered through the user command input unit, displays a mask layer covering at least a portion of the electronic document shown on the display unit. The mask layer may be composed of at least one transparent part through which a portion of the electronic document is shown as-is, and at least one colored part through which a portion of the electronic document is blurred or hidden.
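A minimal sketch of the mask layer's two region types, reduced to plain strings: transparent regions pass the document text through unchanged, while colored regions hide it so a learner can self-test. The data structures are illustrative assumptions:

```python
# Apply a mask layer to a document rendered as lines of text.
# Region granularity and the mode names are invented for illustration.

def apply_mask(lines, mask_regions):
    """mask_regions: {line_index: 'transparent' | 'color'} per line."""
    out = []
    for i, line in enumerate(lines):
        mode = mask_regions.get(i, "transparent")
        out.append(line if mode == "transparent" else "#" * len(line))
    return out

document = ["1592: Imjin War begins", "1897: Korean Empire proclaimed"]
# Hide the second line for recall practice; a user command would toggle a
# region between 'color' and 'transparent'.
for line in apply_mask(document, {1: "color"}):
    print(line)
```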
Abstract:
In the preferred embodiment, a student is able to talk to a pre-recorded actor displayed on a workstation/computer and have the pre-recorded actor respond directly and precisely to the things that the learner is saying, resulting in a fluid and challenging practice conversation without the use of artificial intelligence. A second user (partner) observes the first user (student) either directly or via videoconference or other means and monitors his interaction with the simulated user/actor in the pre-recorded video displayed on the first user's computer screen. The partner has access to an interface that allows her to direct the simulated user/actor by interpreting the first user's communications and selecting among presented options. These options direct the simulated user/actor to respond to the first user's communication in a convincing and helpful manner.
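A minimal sketch of the partner's option-driven control, assuming an invented option-to-clip table: the partner interprets the student's utterance and selects an option, and the matching pre-recorded clip plays as the actor's reply, with no AI interpreting the student's speech. The names below are hypothetical:

```python
# Each option presented to the partner maps to a pre-recorded actor clip.
RESPONSE_OPTIONS = {
    "greeting":         "clip_hello.mp4",
    "asks_price":       "clip_quote_price.mp4",
    "didnt_understand": "clip_please_repeat.mp4",
}

def partner_turn(student_utterance, partner_choice):
    """The partner hears the student and picks an option; the matching
    pre-recorded clip is played as the simulated actor's response."""
    clip = RESPONSE_OPTIONS.get(partner_choice,
                                RESPONSE_OPTIONS["didnt_understand"])
    print(f"Student: {student_utterance!r}")
    return f"play {clip} on the student's screen"

print(partner_turn("Hi, how much is this jacket?", "greeting"))
```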