Abstract:
A system and method for generating an interactive story are disclosed. The system receives audible input on an electronic device, comprising the voice of a user reading a story. The system accesses a plurality of pre-determined triggers associated with the story being read, wherein the electronic device is configured to match the audible input against the triggers via a voice recognition algorithm and, upon a match, to command one or more special effects associated with the story to be output. Interactive sound and visual effects integrated with the story book bring the story to life by adding music, sounds, and character voices.
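The trigger-matching step can be sketched as a lookup of recognized speech against pre-determined trigger phrases. This is a minimal illustrative sketch, not the patented implementation: the `match_triggers` helper, the sample phrases, and the effect names are all assumptions.

```python
def match_triggers(recognized_text, triggers):
    """Return the special effects whose trigger phrase appears in the
    text produced by the voice recognition algorithm."""
    text = recognized_text.lower()
    return [effect for phrase, effect in triggers.items() if phrase in text]

# Hypothetical triggers associated with one story.
story_triggers = {
    "the wolf howled": "play_howl_sound",
    "the door creaked open": "play_creak_sound",
}

effects = match_triggers("And then the wolf howled at the moon", story_triggers)
```

In practice the recognizer would stream partial transcripts and the device would fire each effect at most once per reading, but the core matching is this dictionary scan.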
Abstract:
The computer system displays a user interface that represents text information using an improved phonics-based approach for teaching reading, especially for those with dyslexia or other neurological disorders. The interface enlarges a line of text to be read by a user and places a cursor/pointer under the first word in the line, in the direction of reading, on the computer screen. The user interacts with the interface via the cursor/pointer. The system calculates the cursor/pointer position and highlights the font and background of the traversed part of the text line/element under which the cursor is located. The elements can be letters, syllables, and words. In voiceover mode, the pointer/cursor moves automatically, the user follows it at the speed suggested by the system, and the system voices the word/syllable being read. In non-voiceover mode, the user drags the cursor along the text line.
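The cursor-tracking-and-highlighting logic described above can be sketched as a position lookup over element spans. The span layout, the `highlight_state` helper, and the sample coordinates are illustrative assumptions; the patent does not specify a data representation.

```python
def highlight_state(cursor_x, elements):
    """elements: list of (start_x, end_x, text) spans for the letters,
    syllables, or words laid out along the enlarged text line.
    Returns the spans already traversed (to be highlighted) and the
    span currently under the cursor, if any."""
    traversed = [e for e in elements if e[1] <= cursor_x]
    current = next((e for e in elements if e[0] <= cursor_x < e[1]), None)
    return traversed, current

# Hypothetical pixel spans for a three-word line.
line = [(0, 40, "The"), (45, 90, "quick"), (95, 140, "fox")]
done, under = highlight_state(60, line)
```

In non-voiceover mode `cursor_x` comes from the user's drag; in voiceover mode the system advances it at the suggested pace and voices `under` as it changes.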
Abstract:
A computer program that requires nothing more of a user than to read their choice of textual content in an animated format in order to increase reading speed via reconditioned reading behavior. The computer program: highlights a word on a page and places a copy of the word in the center of the page, so a reader can fix their eyes on the center of the page and still satisfy the common scanning strategy of repositioning the eyes to project a word onto the center of the retina, while being reconditioned, as the highlighting progresses through the body text, to use the faster scanning strategy of fixing the eyes and changing the position on the retina being read; presents a picture representing the meaning of each word in order to recondition the user to use the faster cognitive strategy of triggering recognition of meaning with a picture; presents textual content with timing based on syllables, creating a presentation in sync with the natural timing of speech; can add a speaking voice to the animated format, enabling a non-reader to learn printed language through context alone; and can record the audio of an individual word read aloud from a selection of textual content for future playback of any textual content containing that word.
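The syllable-based timing can be sketched by scaling each word's display duration with an estimated syllable count. The vowel-group heuristic and the 200 ms-per-syllable rate below are illustrative assumptions; the abstract does not specify how syllables are counted or timed.

```python
import re

def estimated_syllables(word):
    """Crude syllable estimate: count groups of consecutive vowels.
    (An assumption for illustration; real syllabification is harder.)"""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def word_duration_ms(word, per_syllable_ms=200):
    """Display duration proportional to syllable count, so the animated
    presentation tracks the natural timing of speech."""
    return estimated_syllables(word) * per_syllable_ms

durations = [(w, word_duration_ms(w)) for w in ["cat", "reading", "animated"]]
```

Longer, multi-syllable words thus stay highlighted longer, matching the cadence a speaking voice would have when added to the animation.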
Abstract:
There is described a method for creating an animation, comprising: inserting at least one icon within a text related to the animation, the at least one icon being associated with an action to be performed by one of an entity and a part of an entity, at a point in time corresponding to a position of the at least one icon in the text, and a given feature of an appearance of the at least one icon being associated with one of the entity and the part of the entity; and executing the text and the at least one icon in order to generate the animation.
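The method above, where icons embedded in the text mark actions at points in time, can be sketched as a scan that turns icon positions into animation events. The icon characters, the action names, and the `parse_animation_script` helper are illustrative assumptions.

```python
def parse_animation_script(text, icon_actions):
    """Scan the text for icon markers and record, for each icon found,
    its position (used as the point in time for the action) and the
    action associated with that icon's appearance."""
    events = []
    for pos, ch in enumerate(text):
        if ch in icon_actions:
            events.append((pos, icon_actions[ch]))
    return events

# Hypothetical icons: each glyph's appearance maps to an entity action.
icons = {"\u266a": "dog_sings", "\u263a": "dog_smiles"}
script = parse_animation_script("The dog \u266a barks \u263a", icons)
```

Executing the text then means emitting each action when playback reaches the corresponding position, which is how the icon's place in the text determines its timing.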
Abstract:
A wirelessly communicative cuddly toy including a stuffed animal toy having a motorized, articulated robotic interior member disposed therein, surrounded by flocculent stuffing material. The robotic interior member includes a transceiver wirelessly communicable by NFC protocol with any appropriate peripheral device. An articulated head member is disposed upon the robotic interior member and configured to animate a mouth of the stuffed animal toy when a speaker, disposed to play audio signals relayed by the transceiver, is activated. Audio played on the peripheral device is thus playable through the speaker on the robotic interior member, and the articulated head member animates the stuffed animal toy in simulation of verbal communication with a user.
Abstract:
Systems and methods are provided to promote effortless automatic recognition of common sight words. A subject performs a game-like task that generates novel non-verbal visual stimuli, which trigger visual attention shifts that enhance foveal and parafoveal recognition of non-verbal and verbal stimuli presented laterally in the right or left visual field. The present invention engages a shared motor-perceptual-cognitive neural network involving oculomotor, visuo-motor, and selective executive cognitive behaviors in both brain hemispheres. It applies to a wide range of non-verbal pre-orthographic visual processes and early lexical processes, contributing to reading fluency not only for dyslexic, reluctant, and slow readers, but also for beginning readers. The present invention has wide applications both to individuals with learning disabilities and to normative individuals learning to read.
Abstract:
A method and system for children learning one or more languages are presented. In one embodiment, the method and system comprise looking at pictures, looking at words in one or more forms, looking at texts depicting one or more relations between words, and listening to audio of words and texts to develop reading skills in a language, consistent with children's cognitive development. According to another embodiment, the method and system comprise looking at pictures, reading words in one or more forms of a language, looking at the handwriting form of a word, moving a pointing device or finger along the strokes of the handwriting form, and listening to audio of words and texts to develop reading and writing skills in one or more languages.
Abstract:
A computing device, having a user input interface including an input button, processes a sentence array including component words of a target sentence string to determine at least one target word to be read by the user. The computing device detects press and hold by the user of the input button, and in direct response, receives and processes user speech input to recognize at least one spoken word, and upon recognizing the at least one spoken word, determines whether the user has correctly read the at least one target word. The computing device also detects release by the user of the input button, and in direct response, identifies a context position relative to the target sentence string, and processes at least one predefined action based on the identified context position.
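The press-and-hold flow above can be sketched as two handlers: one that checks recognized speech against the current target word while the button is held, and one that identifies the context position on release. The function names, the word-by-word comparison, and the use of an index as the "context position" are illustrative assumptions.

```python
def on_speech_while_held(sentence, read_index, spoken_word):
    """While the button is held: compare a word recognized from user
    speech with the current target word in the sentence array, and
    advance past it if it was read correctly."""
    target = sentence[read_index]
    correct = spoken_word.strip().lower() == target.lower()
    return (read_index + 1 if correct else read_index), correct

def on_button_release(sentence, read_index):
    """On release: identify the context position relative to the target
    sentence string -- here, the index of the next unread word, or None
    if the sentence is complete -- for the predefined follow-up action."""
    return read_index if read_index < len(sentence) else None

sentence = ["The", "cat", "sat"]
idx, ok = on_speech_while_held(sentence, 0, "the")
```

A real device would run these off button-state interrupts and a streaming recognizer, but the correct/incorrect decision and the release-time context lookup reduce to these comparisons.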
Abstract:
In one embodiment a method is provided by which a person learns to read text for which the person may already know the text vocabulary in the spoken language. A photographic image is generated which has an association with a spoken word for which the person is to learn to read a corresponding written word. The digital image is imported into a program running on a microprocessor based device. Associated text is entered into the computing device. At least one instructional flashcard is presented as custom reading material for learning to read text in association with the image.
Abstract:
The present invention provides a learning wall chart using near field communication (NFC) as a learning tool, including: a content display unit in which a plurality of learning contents is arranged on a front surface; and a tag array unit in which a plurality of first NFC tags, arranged on a rear surface of the content display unit so as to correspond to the learning contents, stores information corresponding to each learning content. When a terminal having an NFC communication unit approaches a learning content and reads the first NFC tag corresponding to that content, the information stored in the tag is output through an output unit of the terminal. A learning system using the same is also provided.
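On the terminal side, the tag-to-content step reduces to a lookup keyed by the scanned tag's identifier. This is a minimal sketch under stated assumptions: the tag IDs, the content fields, and the `on_tag_read` helper are hypothetical, and a real terminal would read the payload via its NFC stack rather than a dictionary.

```python
# Hypothetical mapping from NFC tag identifier to the learning content
# stored for the corresponding position on the wall chart.
tag_contents = {
    "04:A2:19:B3": {"word": "apple", "audio": "apple.mp3"},
    "04:A2:19:C7": {"word": "ball", "audio": "ball.mp3"},
}

def on_tag_read(tag_id, tags=tag_contents):
    """When the terminal reads a first NFC tag, return the information
    to be sent to the terminal's output unit, or None if unknown."""
    info = tags.get(tag_id)
    if info is None:
        return None
    return f"show '{info['word']}', play {info['audio']}"
```

The chart itself stays passive; all output behavior lives in the terminal that performs the NFC read.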