Abstract:
When a seal-like recording medium with a built-in memory that stores information specifying a communication party is placed in close proximity to a medium read/write unit of a communication terminal apparatus and a button is operated, the information stored in the seal-like medium (an operator in the case of one type of seal-like medium, and a telephone number in the case of another) is read out.
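The read-on-placement flow can be pictured with a minimal Python sketch. The SealMedium and CommunicationTerminal classes and their fields are hypothetical stand-ins; the abstract does not specify a concrete data format or API.

```python
# Minimal sketch of the read-on-button-press flow described above.
# All class and field names are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass
class SealMedium:
    """Seal-like recording medium with a built-in memory."""
    kind: str            # e.g. "operator" or "telephone"
    stored_value: str    # operator ID or telephone number


class CommunicationTerminal:
    def __init__(self) -> None:
        self._nearby_medium: Optional[SealMedium] = None

    def place_medium(self, medium: SealMedium) -> None:
        # Models bringing the seal close to the medium read/write unit.
        self._nearby_medium = medium

    def press_button(self) -> Optional[str]:
        # On button operation, read out whatever the nearby medium stores.
        if self._nearby_medium is None:
            return None
        return self._nearby_medium.stored_value


terminal = CommunicationTerminal()
terminal.place_medium(SealMedium(kind="telephone", stored_value="+81-3-1234-5678"))
print(terminal.press_button())   # -> "+81-3-1234-5678"
```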
Abstract:
Various sensors detect conditions outside a robot and operations applied to the robot, and output the detection results to a robot-motion-system control section. The robot-motion-system control section determines a behavior state according to a behavior model. A robot-thinking-system control section determines an emotion state according to an emotion model. A speech-synthesizing-control-information selection section determines a field in a speech-synthesizing-control-information table according to the behavior state and the emotion state. A language processing section grammatically analyzes a text for speech synthesis sent from the robot-thinking-system control section, converts a predetermined portion according to the speech-synthesizing control information, and outputs the result to a rule-based speech synthesizing section. The rule-based speech synthesizing section synthesizes a speech signal corresponding to the text for speech synthesis.
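The selection step, in which the behavior state and the emotion state together pick a field in the speech-synthesizing-control-information table, can be illustrated with a short Python sketch. The table contents, state names, and parameter names are illustrative assumptions, not taken from the abstract.

```python
# Behavior state and emotion state jointly index a table of
# speech-synthesizing control information (illustrative values).

CONTROL_INFO_TABLE = {
    ("walking", "happy"):  {"pitch_shift": +2.0, "speed": 1.1},
    ("walking", "angry"):  {"pitch_shift": -1.0, "speed": 1.3},
    ("sleeping", "calm"):  {"pitch_shift": -2.0, "speed": 0.8},
}


def select_control_info(behavior_state: str, emotion_state: str) -> dict:
    """Return the speech-synthesizing control information for the given states."""
    return CONTROL_INFO_TABLE.get((behavior_state, emotion_state),
                                  {"pitch_shift": 0.0, "speed": 1.0})


def synthesize(text: str, behavior_state: str, emotion_state: str) -> str:
    # Language processing and rule-based synthesis are stubbed; this only
    # shows the selected control information being applied to the text.
    info = select_control_info(behavior_state, emotion_state)
    return (f"<speech pitch={info['pitch_shift']:+.1f} "
            f"speed={info['speed']:.1f}>{text}</speech>")


print(synthesize("Hello!", "walking", "happy"))
# -> <speech pitch=+2.0 speed=1.1>Hello!</speech>
```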
Abstract:
The present invention relates to a voice output apparatus capable of stopping the output of a voice and outputting a reaction in response to a particular stimulus, so that the voice is output in a natural manner. A rule-based synthesizer 24 produces a synthesized voice and outputs it. For example, when the synthesized voice “Where is an exit?” is produced and output of the synthesized voice data has proceeded as far as “Where is an e”, if a user taps a robot, a reaction generator 30 determines, by referring to a reaction database 31, that a reaction voice “Ouch!” should be output in response to being tapped. The reaction generator 30 then controls an output controller 27 so as to stop outputting the synthesized voice “Where is an exit?” and output the reaction voice “Ouch!”. Thereafter, the reaction generator 30 controls the read pointer of a buffer 26, managed by a read controller 29, such that output of the synthesized voice is resumed from the point at which it was stopped. Thus, the synthesized voice “Where is an e, Ouch!, xit?” is output.
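The interrupt-and-resume behaviour around the buffer's read pointer can be sketched as follows. The chunked output loop and the reaction table are simplified stand-ins for the output controller 27, buffer 26, read controller 29, and reaction database 31; they are not the patent's implementation.

```python
# Minimal sketch: a read pointer into a buffered utterance is held where the
# stimulus arrives, the reaction is emitted, and output resumes from there.

from typing import Optional


class VoiceOutput:
    REACTIONS = {"tap": "Ouch!"}          # stands in for the reaction database

    def __init__(self, utterance: str) -> None:
        self.buffer = utterance           # buffered synthesized-voice data
        self.read_pointer = 0             # next position to output

    def speak(self, stimulus_at: Optional[int] = None, stimulus: str = "tap") -> str:
        out = []
        while self.read_pointer < len(self.buffer):
            if stimulus_at is not None and self.read_pointer == stimulus_at:
                # Stop the synthesized voice, emit the reaction, then resume
                # from the unchanged read pointer.
                out.append(", " + self.REACTIONS[stimulus] + ", ")
                stimulus_at = None
                continue
            out.append(self.buffer[self.read_pointer])
            self.read_pointer += 1
        return "".join(out)


print(VoiceOutput("Where is an exit?").speak(stimulus_at=len("Where is an e")))
# -> "Where is an e, Ouch!, xit?"
```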
Abstract:
In a synchronization control apparatus, a voice-language-information generating section generates the voice language information of a word which a robot utters. A voice synthesizing section calculates phoneme information and a phoneme continuation duration according to the voice language information, and also generates synthesized-voice data according to an adjusted phoneme continuation duration. An articulation-operation generating section calculates an articulation-operation period according to the phoneme information. A voice-operation adjusting section adjusts the phoneme continuation duration and the articulation-operation period. An articulation-operation executing section operates an organ of articulation according to the adjusted articulation-operation period.
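The adjustment of the phoneme continuation duration against the articulation-operation period can be sketched as below. The proportional-stretch rule is an assumption; the abstract only states that both durations are adjusted so that the synthesized voice and the articulation operation stay synchronized.

```python
# Minimal sketch: the total phoneme continuation duration and the
# articulation-operation period are reconciled by stretching the shorter one,
# so mouth motion and synthesized voice end together.

def adjust_durations(phoneme_durations_ms, articulation_period_ms):
    """Scale per-phoneme durations so their sum matches the articulation
    period (or extend the period), and return both adjusted values."""
    total_speech = sum(phoneme_durations_ms)
    if total_speech >= articulation_period_ms:
        # Speech is longer: extend the articulation-operation period.
        return phoneme_durations_ms, total_speech
    # Articulation is longer: stretch each phoneme proportionally.
    scale = articulation_period_ms / total_speech
    return [d * scale for d in phoneme_durations_ms], articulation_period_ms


durations, period = adjust_durations([80, 120, 100], articulation_period_ms=450)
print([round(d) for d in durations], period)   # -> [120, 180, 150] 450
```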
Abstract:
The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed and structured based on tags, character patterns, and the like. A table in the text is also parsed and segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
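The block-division step can be illustrated with a minimal Python sketch. The separator patterns and the <block> tag name are illustrative; the abstract does not fix concrete character patterns.

```python
# Minimal sketch: the converted text is split into blocks wherever a blank
# line or a repeated separator pattern occurs, and each block is tagged.

import re

SEPARATOR = re.compile(r"\n\s*\n|^[-=*]{3,}\s*$", re.MULTILINE)


def divide_into_blocks(text: str) -> list[str]:
    """Split text on blank lines / separator rules and tag each block."""
    blocks = [b.strip() for b in SEPARATOR.split(text) if b and b.strip()]
    return [f"<block>{b}</block>" for b in blocks]


sample = "Title line\n\nFirst paragraph of body text.\n---\nName | Age\nAlice | 30"
for block in divide_into_blocks(sample):
    print(block)
# -> <block>Title line</block>
#    <block>First paragraph of body text.</block>
#    <block>Name | Age
#    Alice | 30</block>
```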
Abstract:
Emotion is added to synthesized speech while the prosodic features of the language are maintained. In a speech synthesis device 200, a language processor 201 generates a string of pronunciation marks from the text, and a prosodic data generating unit 202 creates prosodic data expressing parameters of phonemes such as time duration, pitch, and sound volume, based on the string of pronunciation marks. A constraint information generating unit 203 is fed with the prosodic data and the string of pronunciation marks, generates constraint information which limits the changes in the parameters, and adds the constraint information to the prosodic data. An emotion filter 204, fed with the prosodic data to which the constraint information has been added, changes the parameters of the prosodic data, within the constraint, in response to the feeling state information supplied to it. A waveform generating unit 205 synthesizes the speech waveform based on the prosodic data whose parameters have been changed.
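The role of the constraint information can be sketched as follows: each prosodic parameter carries an allowed range, and the emotion filter's change is clamped to that range. The ranges, emotion names, and scaling factors below are illustrative assumptions, not values from the abstract.

```python
# Minimal sketch: emotion-driven changes to prosodic parameters are clamped
# to per-parameter constraints so the language's prosody is preserved.

from dataclasses import dataclass


@dataclass
class Phoneme:
    symbol: str
    duration_ms: float
    pitch_hz: float


# Constraint information: per-parameter bounds attached to the prosodic data
# (here expressed as multiplicative limits).
CONSTRAINTS = {"duration_ms": (0.8, 1.2), "pitch_hz": (0.9, 1.3)}

# Emotion state -> requested change per parameter (illustrative values).
EMOTION_GAIN = {"angry": {"duration_ms": 0.7, "pitch_hz": 1.5},
                "calm":  {"duration_ms": 1.1, "pitch_hz": 0.95}}


def apply_emotion(phonemes, emotion):
    """Change prosodic parameters, clamping each change to its constraint."""
    out = []
    for p in phonemes:
        changed = {}
        for name in ("duration_ms", "pitch_hz"):
            lo, hi = CONSTRAINTS[name]
            gain = min(max(EMOTION_GAIN[emotion][name], lo), hi)
            changed[name] = getattr(p, name) * gain
        out.append(Phoneme(p.symbol, changed["duration_ms"], changed["pitch_hz"]))
    return out


prosody = [Phoneme("a", 100.0, 220.0), Phoneme("i", 80.0, 240.0)]
print(apply_emotion(prosody, "angry"))
# "angry" asks for 0.7x duration, but the constraint caps the reduction at 0.8x.
```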