Abstract:
A method and apparatus for speaker independent real-time affect detection includes generating (205) a sequence of audio frames from a segment of speech, generating (210) a sequence of feature sets by generating a feature set for each frame, and applying (215) the sequence of feature sets to a sequential classifier to determine a most likely affect expressed in the segment of speech.
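The pipeline above (framing, per-frame feature sets, sequential classification) can be sketched as follows. This is an illustrative toy, not the patented method: the energy/zero-crossing features and the diagonal-Gaussian per-affect models are assumptions standing in for whatever features and sequential classifier the claims cover.

```python
import math

def frame_signal(samples, frame_len=256, hop=128):
    """Step (205): split speech samples into overlapping frames."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

def features(frame):
    """Step (210): a toy per-frame feature set, here energy and
    zero-crossing rate (illustrative choices, not from the patent)."""
    energy = sum(s * s for s in frame) / len(frame)
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)
    return (energy, zcr)

def classify(feature_seq, models):
    """Step (215): a naive sequential classifier that sums per-frame
    log-likelihoods under a diagonal-Gaussian model per affect and
    returns the most likely affect."""
    def loglik(x, mean, var):
        return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)
    best, best_score = None, -math.inf
    for affect, (means, vars_) in models.items():
        score = sum(loglik(x, m, v)
                    for feats in feature_seq
                    for x, m, v in zip(feats, means, vars_))
        if score > best_score:
            best, best_score = affect, score
    return best
```

A loud, rapidly alternating signal would then score highest under a high-energy, high-zero-crossing "angry" model.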
Abstract:
A tailored speaker-independent voice recognition system has a speech recognition dictionary (360) with at least one word (371). That word (371) has at least two transcriptions (373), each transcription (373) having a probability factor (375) and an indicator (377) of whether the transcription is active. When a speech utterance is received (510), the voice recognition system determines (520, 530) the word signified by the speech utterance, evaluates (540) the speech utterance against the transcriptions of that word, updates (550) the probability factors for each transcription, and inactivates (570) any transcription whose updated probability factor is less than a threshold.
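The update-and-prune loop (steps 540-570) can be sketched as follows. The blend-and-renormalize update rule, the smoothing weight, and the threshold value are all assumptions; the patent specifies only that probabilities are updated and low-probability transcriptions are inactivated.

```python
from dataclasses import dataclass

@dataclass
class Transcription:
    phonemes: str          # item (373)
    probability: float     # item (375)
    active: bool = True    # item (377)

def update_transcriptions(transcriptions, scores, threshold=0.1):
    """Blend each active transcription's stored probability with its
    acoustic match score for this utterance (step 550), renormalize,
    and inactivate any transcription below the threshold (step 570).
    The blending rule and alpha are illustrative assumptions."""
    alpha = 0.5  # assumed smoothing weight
    for t, s in zip(transcriptions, scores):
        if t.active:
            t.probability = (1 - alpha) * t.probability + alpha * s
    total = sum(t.probability for t in transcriptions if t.active)
    for t in transcriptions:
        if t.active:
            t.probability /= total
            if t.probability < threshold:
                t.active = False
    return transcriptions
```

Over repeated utterances, transcriptions that never match the speaker's pronunciation drift below the threshold and are switched off.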
Abstract:
A dictionary is comprised of a dendroid hierarchy of branches and nodes, wherein each node represents no more than one symbol (which symbol is to be converted to a corresponding sound) and wherein each such symbol as is represented at a given node has only one corresponding sound associated with that symbol at that node. In addition, many of the branches include a plurality of nodes representing a string of the symbols in a particular sequence. The dictionary is used to translate an input comprising a given integral sequence of the symbols into a corresponding integral sequence of sounds. This permits both a method and an apparatus to convert, for example, text into representative phonemes. Such phonemes can be used, amongst other purposes, to support synthesized speech production.
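The dendroid hierarchy described above is essentially a trie whose nodes carry at most one sound. A minimal sketch, assuming a greedy longest-match walk (the matching strategy is my assumption, not stated in the abstract):

```python
class Node:
    """One node of the dendroid hierarchy: at most one symbol edge
    per child, and at most one sound attached at this node."""
    def __init__(self):
        self.children = {}   # symbol -> Node (the branches)
        self.sound = None    # the single sound for the string ending here

def insert(root, symbols, sound):
    """Grow a branch for a symbol string and attach its one sound."""
    node = root
    for s in symbols:
        node = node.children.setdefault(s, Node())
    node.sound = sound

def translate(root, text):
    """Greedy longest-match walk: at each position, follow the deepest
    branch whose node carries a sound, emit that sound, and continue."""
    sounds, i = [], 0
    while i < len(text):
        node, j, best_sound, best_len = root, i, None, 1
        while j < len(text) and text[j] in node.children:
            node = node.children[text[j]]
            j += 1
            if node.sound is not None:
                best_sound, best_len = node.sound, j - i
        if best_sound is not None:
            sounds.append(best_sound)
        i += best_len
    return sounds
```

With entries for "th", "e", "c", "a", and "t", the digraph branch wins over the single-letter branch wherever it applies, so "thecat" yields one sound for "th" rather than separate sounds for "t" and "h".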
Abstract:
A method, system and communication device for enabling voice-to-voice searching and ordered content retrieval via audio tags assigned to individual content, which tags generate uniterms that are matched against components of a voice query. The method includes storing content and tagging at least one item of the content with an audio tag. The method further includes receiving a voice query to retrieve content stored on the device. When the voice query is received, the method completes a voice-to-voice search utilizing uniterms of the audio tag, scored against the phoneme lattice model generated from the voice query, to identify matching terms within the audio tags and corresponding stored content. The retrieved content associated with the identified audio tags having uniterms that score within the phoneme lattice model is outputted in an order corresponding to the order in which the uniterms are structured within the voice query.
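The matching-and-ordering step can be sketched with a heavily simplified lattice. Here the phoneme lattice is reduced to a list of per-time-slot phoneme alternatives, and a uniterm "scores" simply by being traceable through consecutive slots; real lattice scoring is probabilistic, so this is an illustrative stand-in only.

```python
def uniterm_position(uniterm, lattice):
    """Return the earliest lattice position where the uniterm (a tuple
    of phonemes) can be traced through consecutive lattice slots, or
    None if it never matches. The per-slot-alternatives lattice is a
    simplification of the abstract's phoneme lattice model."""
    for start in range(len(lattice) - len(uniterm) + 1):
        if all(p in lattice[start + k] for k, p in enumerate(uniterm)):
            return start
    return None

def retrieve_ordered(tagged_content, lattice):
    """tagged_content: {content_name: uniterm}. Content whose uniterm
    matches the query lattice is returned ordered by match position,
    i.e. in the order the uniterms occur within the voice query."""
    hits = []
    for name, uniterm in tagged_content.items():
        pos = uniterm_position(uniterm, lattice)
        if pos is not None:
            hits.append((pos, name))
    return [name for pos, name in sorted(hits)]
```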
Abstract:
During operation, a "coarse search" stage applies variable-scale windowing on the query pitch contours to compare them with fixed-length segments of target pitch contours to find matching candidates while efficiently scanning over variable tempo differences and target locations. Because the target segments are of fixed length, this has the effect of drastically reducing the storage space required by prior-art methods. Furthermore, by breaking the query contours into parts, rhythmic inconsistencies can be more flexibly handled. Normalization is also applied to the contours to allow comparisons independent of differences in musical key. In a "fine search" stage, a "segmental" dynamic time warping (DTW) method is applied that calculates a more accurate similarity score between the query and each candidate target with more explicit consideration toward rhythmic inconsistencies.
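The two building blocks above, key normalization and DTW, can be sketched as follows. This shows plain mean-subtraction and textbook DTW; the patent's "segmental" variant would apply the DTW cost per query part, which is not reproduced here.

```python
def normalize(contour):
    """Key-invariant form: subtract the mean pitch (e.g. in semitones),
    so transposed melodies compare equal."""
    m = sum(contour) / len(contour)
    return [p - m for p in contour]

def dtw(a, b):
    """Plain dynamic time warping cost between two pitch contours.
    Warping absorbs tempo differences; the 'segmental' method would
    run this per part of the query contour."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # skip a frame of a
                                 d[i][j - 1],      # skip a frame of b
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

A melody transposed up five semitones costs zero after normalization, and a melody sung at half tempo also costs zero under the warp.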
Abstract:
A search system receives a voice query and uses speech recognition with a predefined vocabulary to generate a textual transcription of the voice query. The transcription yields initial text queries that are sent to a text search engine, which retrieves multiple web page results for each query. A collection of keywords is extracted from the resulting web pages and is phonetically indexed to form a voice-query-dependent, phonetically searchable index database. Finally, a phonetically-based voice search engine searches the original voice query against this index database to find the keywords and/or key phrases that best match what was originally spoken. Those keywords and/or key phrases are then used as a final text query for a search engine, and the results of that final query are presented to the user.
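The phonetically searchable index at the heart of this pipeline can be sketched with a toy phonetic key. A Soundex-like code stands in here for true phoneme-level indexing, so this is only an illustration of "index keywords by how they sound, then look up the query the same way".

```python
def phonetic_key(word):
    """Toy Soundex-like phonetic code: keep the first letter, then
    encode consonant classes, dropping vowels and repeats. A real
    system would index actual phoneme strings instead."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    w = word.lower()
    key, prev = w[0], codes.get(w[0], "")
    for ch in w[1:]:
        c = codes.get(ch, "")
        if c and c != prev:
            key += c
        prev = c
    return (key + "000")[:4]

def build_index(keywords):
    """Phonetically index the keywords extracted from web pages."""
    index = {}
    for kw in keywords:
        index.setdefault(phonetic_key(kw), []).append(kw)
    return index

def phonetic_search(index, spoken_word):
    """Look up whatever was spoken by its phonetic key."""
    return index.get(phonetic_key(spoken_word), [])
```

Homophones such as "weather" and "whether" share a key, so a spoken query finds both even though their spellings differ.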
Abstract:
Disclosed is a method for parsing a verbal expression received from a user to determine whether or not the expression contains a multiple-goal command. Specifically, known techniques are applied to extract terms from the verbal expression. The extracted terms are assigned to categories. If two or more terms are found in the parsed verbal expression that are in associated categories and that do not overlap one another temporally, then the confidence levels of these terms are compared. If the confidence levels are similar, then the terms may be parallel entries in the verbal expression and may represent multiple goals. If a multiple-goal command is found, then the command is either presented to the user for review and possible editing or is executed. If the parsed multiple-goal command is presented to the user for review, then the presentation can be made via any appropriate interface including voice and text interfaces.
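The parallel-entry test above (associated categories, no temporal overlap, similar confidence) can be sketched directly. The term representation and the confidence tolerance are assumptions; the abstract does not define "similar".

```python
def find_multiple_goals(terms, associated, conf_tol=0.15):
    """terms: list of (text, category, start_time, end_time, confidence)
    tuples extracted from the verbal expression. Returns pairs of terms
    that lie in associated categories, do not overlap temporally, and
    have similar confidence levels -- candidate parallel goals.
    conf_tol is an assumed threshold for 'similar confidence'."""
    goals = []
    for i, a in enumerate(terms):
        for b in terms[i + 1:]:
            if (frozenset((a[1], b[1])) in associated
                    and (a[3] <= b[2] or b[3] <= a[2])   # no temporal overlap
                    and abs(a[4] - b[4]) <= conf_tol):   # similar confidence
                goals.append((a[0], b[0]))
    return goals
```

For "call mom and dad", the two contact terms occupy disjoint time spans with comparable confidences, so they surface as a multiple-goal candidate, while an overlapping alternative hypothesis for the same audio span is rejected.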
Abstract:
Disclosed are editing methods that are added to speech-based searching to allow users to better understand textual queries submitted to a search engine and to easily edit their speech queries. According to some embodiments, the user begins to speak. The user's speech is translated into a textual query and submitted to a search engine. The results of the search are presented to the user. As the user continues to speak, the user's speech query is refined based on the user's further speech. The refined speech query is converted to a textual query which is again submitted to the search engine. The refined results are presented to the user. This process continues as long as the user continues to refine the query. Some embodiments present the textual query to the user and allow the user to use both speech-based and non-speech-based tools to edit the textual query.
Abstract:
A method and apparatus for generating a voice tag (140) includes a means (110) for combining (205) a plurality of utterances (106, 107, 108) into a combined utterance (111) and a means (120) for extracting (210) the voice tag as a sequence of phonemes having a high likelihood of representing the combined utterance, using a set of stored phonemes (115) and the combined utterance.
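A minimal sketch of the two means, under strong simplifying assumptions: the utterances are already time-aligned, equal-length sequences of scalar features, combining is plain averaging, and extraction is nearest-prototype matching with repeat collapsing. A real system would align with DTW and use full acoustic models.

```python
def combine(utterances):
    """The combiner (110): average time-aligned feature sequences into
    one combined utterance. Equal length is assumed; real systems
    would time-align the utterances first."""
    n = len(utterances)
    return [sum(vals) / n for vals in zip(*utterances)]

def extract_voice_tag(combined, phoneme_models):
    """The extractor (120): for each combined frame, pick the nearest
    stored phoneme prototype (the stored phonemes 115), then collapse
    consecutive repeats into the voice tag's phoneme sequence."""
    tag = []
    for frame in combined:
        best = min(phoneme_models,
                   key=lambda p: abs(phoneme_models[p] - frame))
        if not tag or tag[-1] != best:
            tag.append(best)
    return tag
```

Averaging the three enrollment utterances cancels per-utterance noise before a single phoneme decoding pass, which is the point of combining before extracting.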
Abstract:
A method, a system and a computer program product for interpreting a verbal input in a multimodal dialog system are provided. The method includes assigning (302) a confidence value to at least one word generated by a verbal recognition component. The method further includes generating (304) a semantic unit confidence score for the verbal input, based on the confidence value of the at least one word and on at least one semantic confidence operator.
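Step (304) can be sketched as combining per-word confidences with a chosen operator. The particular operators shown (min, product, mean) are illustrative guesses at what a "semantic confidence operator" might be; the abstract does not enumerate them.

```python
import math

def semantic_unit_score(word_confidences, operator="min"):
    """Combine per-word confidence values (step 302) into a semantic
    unit confidence score (step 304). The operator table below is an
    assumed set of semantic confidence operators."""
    ops = {
        "min": min,                                   # weakest-link score
        "product": math.prod,                         # joint confidence
        "mean": lambda cs: sum(cs) / len(cs),         # average confidence
    }
    return ops[operator](word_confidences)
```

The choice of operator changes how forgiving the score is: "min" lets one poorly recognized word veto the whole semantic unit, while "mean" dilutes it.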