Abstract:
An editing system having a dialogue operation type interface directs a next operation by referring to an operation history. User information is input by using speech input/output, finger pointing, and 3-D CG. A human image representing the system is displayed as an agent on the image output device, and user errors, queue availability, and the utilization environment are extracted by the system and reported to the user through the agent. The system responds to the user's intent by image display or speech output, using the agent as a medium, so that a user-friendly interface for graphics editing and image editing is provided.
Abstract:
Input sign language word labels, together with editing items such as the speeds and positions of moving parts that specify the manual signs and/or sign gestures corresponding to each sign language word label, are displayed on an editing screen. The user modifies these editing items to add non-language information, such as emphasis or feeling information, to the contents of communication, thereby generating modified sign language animation information data comprising the input sign language word label string with the added non-language information. For communication or interaction, the non-language information is extracted from the modified sign language animation information data and stored in a memory together with the input sign language word label string. When a hearing-impaired person communicates or interacts with another person through text, the user can thus emphasize the contents of communication or convey his or her feelings about those contents to the other person.
Abstract:
An information management server and information distribution system utilize a video of the class to create educational contents matched to the learning conditions of the student, and also provide instruction using those same contents. The information server and information distribution system comprise an accumulator section that stores electronic data on the lecture contents, a holding section that holds lecture-related information, a send section that sends lecture contents and lecture-related information to the terminal of the student, an analyzer section that analyzes the electronic data on the lecture contents, a matcher section that links the lecture-related information with the lecture contents based on the analysis results, and a control section that selects lecture contents linked to the lecture-related information based on a reply to the lecture-related information sent from the student terminal, wherein the send section sends the selected lecture contents to the terminal of the student that sent the reply to the lecture-related information.
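The section layout described above can be sketched as a minimal class; all names (`LectureServer`, `store`, `hold`, `select_for_reply`) and the dictionary-based storage are illustrative assumptions, not details from the patent:

```python
class LectureServer:
    """Toy sketch of the accumulator/holding/control sections."""

    def __init__(self):
        self.contents = {}      # accumulator: content_id -> lecture data
        self.related_info = {}  # holding: question_id -> (question, linked content ids)

    def store(self, content_id, data):
        # accumulator section: store electronic data on the lecture contents
        self.contents[content_id] = data

    def hold(self, question_id, question, linked_content_ids):
        # holding section: lecture-related information, already linked to
        # content ids by a prior analysis/matching pass
        self.related_info[question_id] = (question, linked_content_ids)

    def select_for_reply(self, question_id, reply_ok):
        # control section: if the student's reply indicates difficulty,
        # select the linked lecture segments for the send section
        question, links = self.related_info[question_id]
        return [] if reply_ok else [self.contents[c] for c in links]


server = LectureServer()
server.store("seg1", "video: introduction to derivatives")
server.hold("q1", "Did you follow the derivative rules?", ["seg1"])
print(server.select_for_reply("q1", reply_ok=False))
```

A negative reply returns the linked lecture segment for review; a positive reply returns nothing, mirroring the control section's selection step.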
Abstract:
The invention, taking an electronic content utilization casing as the basis, extracts input activity to that casing; estimates data input positions within the content by calculating similarity values and difference values among the activity data, and similarity values and difference values between the activity data and model data; estimates the input state of the user from the estimated input positions; and presents these estimates as the content utilization state.
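The similarity-based position estimation above might look like the following sketch, where cosine similarity over feature vectors and the model dictionary are assumptions chosen for illustration:

```python
import math


def similarity(a, b):
    # cosine similarity between two activity-feature vectors
    # (the feature representation is an assumption, not from the patent)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def estimate_input_position(activity, models):
    # estimate the input position as the model segment whose pattern
    # the observed activity data resembles most
    scores = {pos: similarity(activity, m) for pos, m in models.items()}
    return max(scores, key=scores.get)


models = {"section-1": [1.0, 0.1, 0.0], "section-2": [0.0, 0.9, 0.4]}
print(estimate_input_position([0.1, 1.0, 0.5], models))  # → section-2
```

The difference values mentioned in the abstract could analogously be distances (e.g. Euclidean) combined with the similarity scores before taking the maximum.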
Abstract:
A prosodic parameter for an input text is computed by storing sentences of vocalized speech in a speech corpus memory, searching the speech corpus, with the input text as a key, for a stored text having similar prosody, and modifying the prosodic parameter based on the search results. Because a plurality of prosodic parameters are handled as linked data, a synthesized sound close to natural speech, with natural intonation and prosody, is produced.
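The corpus-search step can be sketched as follows; the pitch-contour representation, the toy corpus, and the use of string similarity as the search key are illustrative assumptions (the patent's actual similarity measure is not specified here):

```python
import difflib

# Toy corpus: stored sentence -> prosodic parameters (here, a pitch contour)
corpus = {
    "good morning everyone": [120, 140, 110],
    "see you tomorrow":      [130, 115, 100],
}


def prosody_for(text):
    # search the corpus for the stored sentence most similar to the input
    # text, then reuse its prosodic parameters as the starting point
    # (a real system would then modify them to fit the input)
    best = max(corpus, key=lambda s: difflib.SequenceMatcher(None, s, text).ratio())
    return list(corpus[best])


print(prosody_for("good morning"))
```

Returning a copy of the matched contour stands in for the abstract's "modifying the prosodic parameter based upon the search results" step.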
Abstract:
A method of accepting multimedia operation commands wherein, while pointing to either a display object or a display position on the display screen of a graphics display system through a pointing input device, a user commands the graphics display system, through a voice input device, to cause an event on the graphics display; comprising a first step of allowing the user to perform the pointing gesture so as to enter a string of coordinate points which surround an area for either the display object or any desired display position; a second step of allowing the user to give the voice command together with the pointing gesture; a third step of recognizing the command content of the voice command by a speech recognition process in response to the voice command; a fourth step of recognizing the command content of the pointing gesture in accordance with the recognition result of the third step; and a fifth step of executing the event on the graphics display in accordance with the command contents of the voice command and the pointing gesture. The method thus provides a man-machine interface which utilizes the plural media of voice and pointing gesture, offers high operability to the user, and allows an illustration or the like to be edited easily.
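The five steps above can be sketched as a small pipeline; the bounding-box region test, the stubbed "recognized" voice text, and all names are simplifying assumptions, not the patent's actual recognition process:

```python
def objects_in_region(points, objects):
    # steps 1 and 4: resolve the pointing gesture by finding display objects
    # inside the bounding box of the circled coordinate points
    # (a bounding box stands in for the true enclosed-area test)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    return [name for name, (x, y) in objects.items()
            if x0 <= x <= x1 and y0 <= y <= y1]


def execute_command(voice_text, points, objects):
    # steps 3 and 5: the voice command (already "recognized" here as text)
    # is combined with the gesture's targets to produce display events
    verb = voice_text.split()[0]
    return [(verb, obj) for obj in objects_in_region(points, objects)]


display = {"circle-A": (5, 5), "square-B": (50, 50)}
gesture = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(execute_command("delete this", gesture, display))
```

Only `circle-A` falls inside the circled region, so only it receives the spoken "delete" event, which is the essence of combining the two media.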
Abstract:
The mailing system of the invention converts text information into movement information using animations, object information, and background information, and sends/receives the converted information as mail. The system analyzes the information input or selected by a user, creates an animation movement using the analyzed information, and selects an object and a background using the analyzed information. The system sends/receives the created or selected animation movement information, object information, and background information as mail, and displays the mail as if the sender and receiver were engaged in a dialogue.
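The text-to-animation conversion could be sketched with simple keyword rules; the rules, field names, and values below are illustrative placeholders for the patent's unspecified analysis:

```python
def text_to_animation_mail(text):
    # analyze the input text and map it to movement, object, and
    # background information for the animation mail
    lowered = text.lower()
    movement = "wave" if "hello" in lowered else "nod"
    background = "outdoor" if "park" in lowered else "room"
    return {
        "movement": movement,      # animation movement information
        "object": "avatar",        # object information
        "background": background,  # background information
        "text": text,              # original text carried in the mail
    }


print(text_to_animation_mail("Hello! Meet me at the park."))
```

The resulting dictionary is what would be sent/received as the mail body and replayed on the receiver's side as a dialogue scene.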
Abstract:
A content is delivered in consideration of the terminal used by a user, the ambient environment of the user and the terminal, and the characteristics and preferences of the user. A content delivery server has an input/output unit for transmitting and receiving information between itself and a terminal, a content management unit for managing contents composed of modalities, and a control unit for controlling the input/output unit and the content management unit. The control unit obtains attribute information composed of terminal attribute information on the output interface at the terminal, environmental attribute information on the current ambient environment of the terminal, and user attribute information on the characteristics of the user; generates, based on the obtained attribute information, modality construction information for specifying the modalities of a content to be delivered to the terminal; determines, by using the modality construction information, a modality construction for the content to be delivered which is under the management of the content management unit; and delivers the content composed of the determined modalities to the terminal via the input/output unit.
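The step of turning attribute information into a modality construction might look like the rule-based sketch below; the specific rules, thresholds, and attribute keys are assumptions for illustration only:

```python
def choose_modalities(terminal, environment, user):
    # combine terminal, environment, and user attribute information into
    # modality construction information (the rule set is illustrative)
    modalities = []
    if terminal.get("screen") and not user.get("visually_impaired"):
        modalities.append("text")
        modalities.append("image")
    if terminal.get("speaker") and environment.get("noise_db", 0) < 70:
        # skip audio in loud surroundings
        modalities.append("audio")
    return modalities


print(choose_modalities({"screen": True, "speaker": True},
                        {"noise_db": 80},
                        {"visually_impaired": False}))
```

Here a noisy environment suppresses the audio modality, so the same content would be delivered as text and images only; the content management unit would then assemble the content from those modalities.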
Abstract:
A user-requested command is indicated with both visual and auditory information, and the user understands, through a user-friendly method, what content to input. The user-requested command is expressed in the form of a template sentence. The template part of the template sentence is vocalized, and the slot area in the template sentence is expressed using a sound or voice. The user then inputs his or her voice for the slot area.
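The template/slot prompting can be sketched as below; the `{slot}` marker, the cue-sound name, and the event-tuple output are assumed conventions, not from the patent:

```python
def prompt(template, slot_marker="{slot}"):
    # vocalize the fixed template parts and play a cue sound at the slot
    # position, where the user is expected to speak the missing content
    fixed, _, rest = template.partition(slot_marker)
    return [
        ("speak", fixed.strip()),   # template part before the slot
        ("sound", "slot-cue"),      # audible cue marking the slot area
        ("speak", rest.strip()),    # template part after the slot
    ]


print(prompt("Departing from {slot} tomorrow"))
```

A speech front end would play these events in order, so the user hears "Departing from", a cue tone, then "tomorrow", and knows to speak a place name at the cue.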