Abstract:
A method, system and apparatus for text grouping in a disambiguation process. A text grouping method for use in a disambiguation process can include producing a phonetic representation for each entry in a text list, sorting the list according to the phonetic representation, grouping phonetically similar entries in the list, and providing the sorted list with the groupings to the disambiguation process. The producing step can include producing a phonetic representation for each word, as well as for each phrase, in the text list.
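A minimal sketch of this pipeline, assuming a Soundex-style key as the phonetic representation since the abstract does not name a particular algorithm:
```python
from itertools import groupby

def soundex(word: str) -> str:
    """Soundex-style key: first letter plus up to three digit codes."""
    codes = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
             "l": "4", "mn": "5", "r": "6"}
    key, last = word[0].upper(), ""
    for ch in word[1:].lower():
        digit = next((d for letters, d in codes.items() if ch in letters), "")
        if digit and digit != last:
            key += digit
        last = digit
    return (key + "000")[:4]

def group_phonetically(entries):
    """Sort the text list by phonetic key, then group adjacent matches."""
    ordered = sorted(entries, key=soundex)
    return [list(g) for _, g in groupby(ordered, key=soundex)]

print(group_phonetically(["Smith", "Smyth", "Jones", "Johns"]))
# -> [['Jones', 'Johns'], ['Smith', 'Smyth']]  (keys J520 and S530)
```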
Abstract:
A wizard that can create various audio interfaces from a fixed design. The generated interfaces can be speech only, DTMF only, or various mixed speech-and-DTMF UIs. When both speech and DTMF prompts are specified, a number of combinations of these interfaces can be generated automatically. Robust speech recognition systems can be built by automatically generating a “shadow” DTMF application. The DTMF application performs the same task as the primary speech application; however, the transfer to the DTMF application can be initiated explicitly by the user or can occur automatically (as either a temporary or a permanent transition) at a point in the call flow where speech recognition has run into trouble.
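One way the shadow fallback could behave, sketched in Python with illustrative names (the patent describes the idea, not this API):
```python
class DialogStep:
    """One call-flow step carrying both its speech and DTMF prompts."""
    def __init__(self, name, speech_prompt, dtmf_prompt, dtmf_map):
        self.name = name
        self.speech_prompt = speech_prompt  # "Say checking or savings."
        self.dtmf_prompt = dtmf_prompt      # "Press 1 for checking, 2 for savings."
        self.dtmf_map = dtmf_map            # {"1": "checking", "2": "savings"}

def run_step(step, recognize_speech, collect_digit, max_failures=2):
    """Try speech first; after max_failures, shadow over to DTMF.
    Returns (value, switched) so the caller can make the switch
    temporary (this step only) or permanent (rest of the call flow)."""
    for _ in range(max_failures):
        result = recognize_speech(step.speech_prompt)
        if result is not None:
            return result, False
    digit = collect_digit(step.dtmf_prompt)  # the transfer point in the flow
    return step.dtmf_map.get(digit), True
```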
Abstract:
A system and method provide automated local weather reports for display on wireless communication devices such as telephones. The system provides information for display on a wireless communication device by establishing a communications link with the device, determining its location, and transmitting location-related content, such as meteorological information, to it.
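The flow reduces to three steps; a hedged sketch, where locate_device, fetch_weather, and send_to_device stand in for carrier-location and weather-data services the abstract does not specify:
```python
def deliver_local_weather(device_id, locate_device, fetch_weather, send_to_device):
    """Link to the device, resolve its location, and push local content."""
    location = locate_device(device_id)   # e.g. cell-tower or GPS fix
    report = fetch_weather(location)      # current conditions and forecast
    send_to_device(device_id, report)     # render on the handset display
```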
Abstract:
A method of adjusting music length to the expected waiting time while a caller is on hold includes choosing one or more media selections based upon their play durations and matching the selection(s) to the expected waiting time.
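For illustration, one plausible selection strategy is a greedy fill toward the expected wait; the abstract does not mandate any particular algorithm:
```python
def pick_hold_music(tracks, expected_wait_s, tolerance_s=15):
    """tracks: list of (name, duration_s). Greedily pick selections whose
    combined play duration approximates the expected waiting time."""
    playlist, total = [], 0
    for name, duration in sorted(tracks, key=lambda t: -t[1]):
        if total + duration <= expected_wait_s + tolerance_s:
            playlist.append(name)
            total += duration
    return playlist, total

tracks = [("A", 240), ("B", 180), ("C", 95), ("D", 60)]
print(pick_hold_music(tracks, expected_wait_s=300))
# -> (['A', 'D'], 300): 300 seconds of music for a 300-second wait
```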
Abstract:
A method and system for defining standard catch styles used in generating speech application code for managing catch events, in which a style-selection menu allowing selection of one or more catch styles is presented. Each catch style represents a system response to a catch event. A catch style can be selected from the style-selection menu. For each selected catch style, the system can prepare a response for each catch event. If the selected catch style requires playing a new audio message in response to a particular catch event, a contextual message can be entered in one or more text fields. The contextual message entered in each text field corresponds to the new audio message that will be played in response to the particular catch event. In certain catch styles, the entered contextual message is different for each catch event, while in other catch styles, the entered contextual message is the same for each catch event. Finally, if the selected catch style does not require playing a new audio message in response to a particular catch event, the system can replay the system prompt.
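A rough sketch of how catch styles might map events to responses, using assumed style names and the common VoiceXML catch events noinput and nomatch:
```python
CATCH_EVENTS = ("noinput", "nomatch")

def build_catch_handlers(style, system_prompt, contextual_messages=None):
    """style: 'per_event' (different message per event), 'uniform'
    (same message for every event), or 'reprompt' (replay the prompt)."""
    handlers = {}
    for event in CATCH_EVENTS:
        if style == "per_event":
            handlers[event] = contextual_messages[event]
        elif style == "uniform":
            handlers[event] = contextual_messages
        else:  # 'reprompt': no new audio, replay the system prompt
            handlers[event] = system_prompt
    return handlers

print(build_catch_handlers("per_event", "Say a city name.",
                           {"noinput": "I didn't hear you.",
                            "nomatch": "I didn't understand."}))
```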
Abstract:
Methods, apparatus, and products are disclosed for adjusting a speech engine for a mobile computing device based on background noise, the mobile computing device operatively coupled to a microphone, that include: sampling, through the microphone, background noise for a plurality of operating environments in which the mobile computing device operates; generating, for each operating environment, a noise model in dependence upon the sampled background noise for that operating environment; and configuring the speech engine for the mobile computing device with the noise model for the operating environment in which the mobile computing device currently operates.
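A hedged sketch of the per-environment flow, reducing a “noise model” to a mean power level (real systems would use richer spectral models; all names are illustrative):
```python
import statistics

def build_noise_models(samples_by_env):
    """samples_by_env: {'car': [...], 'office': [...]} of noise power
    samples captured through the device microphone."""
    return {env: statistics.mean(samples)
            for env, samples in samples_by_env.items()}

def configure_engine(engine_settings, models, current_env):
    """Apply the stored noise model for the current operating environment."""
    engine_settings["noise_floor"] = models[current_env]
    return engine_settings

models = build_noise_models({"car": [62.1, 65.4, 63.0], "office": [41.2, 39.8]})
print(configure_engine({}, models, "car"))
```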
Abstract:
Speech enabled media sharing in a multimodal application including parsing, by a multimodal browser, one or more markup documents of a multimodal application; identifying, by the multimodal browser, in the one or more markup documents a web resource for display in the multimodal browser; loading, by the multimodal browser, a web resource sharing grammar that includes keywords for modes of resource sharing and keywords for targets for receipt of web resources; receiving, by the multimodal browser, an utterance matching a keyword for the web resource, a keyword for a mode of resource sharing, and a keyword for a target for receipt of the web resource in the web resource sharing grammar, thereby identifying the web resource, a mode of resource sharing, and a target for receipt of the web resource; and sending, by the multimodal browser, the web resource to the identified target for the web resource using the identified mode of resource sharing.
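A toy sketch of matching an utterance against such a grammar; the keyword sets here are invented for illustration:
```python
MODES = {"email", "text", "post"}
TARGETS = {"bob", "alice", "my blog"}

def parse_share_utterance(utterance, resource_keywords):
    """Return (resource, mode, target) if the utterance names one of each.
    Plain substring matching stands in for real grammar matching."""
    text = utterance.lower()
    resource = next((r for r in resource_keywords if r in text), None)
    mode = next((m for m in MODES if m in text), None)
    target = next((t for t in TARGETS if t in text), None)
    if resource and mode and target:
        return resource, mode, target
    return None

print(parse_share_utterance("email this photo to bob", {"photo", "video"}))
# -> ('photo', 'email', 'bob')
```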
Abstract:
A method for creating and editing an XML-based speech synthesis document for input to a text-to-speech engine is provided. The method includes recording voice utterances of a user reading a pre-selected text and parsing the recorded voice utterances into individual words and periods of silence. The method also includes recording a synthesized speech output generated by a text-to-speech engine, the synthesized speech output being an audible rendering of the pre-selected text, and parsing the synthesized speech output into individual words and periods of silence. The method further includes annotating the XML-based speech synthesis document based upon a comparison of the recorded voice utterances and the recorded synthesized speech output.
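One way the comparison could drive annotation, sketched with assumed timing inputs and thresholds: pauses present in the human reading but absent from the TTS output become break elements in the synthesis document:
```python
def annotate_breaks(human_words, tts_words, pause_threshold_s=0.3):
    """Each input: list of (word, following_silence_s). Returns SSML-style
    markup with breaks added where the human paused but the TTS did not."""
    parts = []
    for (word, h_pause), (_, t_pause) in zip(human_words, tts_words):
        parts.append(word)
        if h_pause - t_pause > pause_threshold_s:
            parts.append(f'<break time="{h_pause:.1f}s"/>')
    return " ".join(parts)

human = [("Hello", 0.6), ("world", 0.1)]
tts = [("Hello", 0.1), ("world", 0.1)]
print(annotate_breaks(human, tts))
# -> Hello <break time="0.6s"/> world
```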
Abstract:
The present invention discloses a solution for assuring that user-defined voice commands are unambiguous. The solution can include a step of identifying a user attempt to enter a user-defined voice command into a voice-enabled system. A safety analysis can be performed on the user-defined voice command to determine the likelihood that it will be confused with preexisting voice commands recognized by the voice-enabled system. When the safety analysis determines a high likelihood of confusion, a notification can be presented that the user-defined voice command is subject to confusion. The user can then define a different voice command or choose to continue using the potentially confusing command, possibly subject to a system-imposed confusion-mitigating condition or action.
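A minimal sketch of such a safety analysis, assuming a normalized edit-distance similarity since the patent does not specify the measure:
```python
from difflib import SequenceMatcher

def confusability(new_cmd, existing_cmds, threshold=0.8):
    """Return preexisting commands likely to be confused with new_cmd."""
    scored = [(cmd, SequenceMatcher(None, new_cmd.lower(), cmd.lower()).ratio())
              for cmd in existing_cmds]
    return [(cmd, round(score, 2)) for cmd, score in scored if score >= threshold]

existing = ["call home", "call phone", "play music"]
print(confusability("call Rome", existing))
# -> [('call home', 0.89)]: flag this command as subject to confusion
```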
Abstract:
Signaling correspondence between a meeting agenda and a meeting discussion includes: receiving a meeting agenda specifying one or more topics for a meeting; analyzing, for each topic, one or more documents to identify topic keywords for that topic; receiving meeting discussions among participants for the meeting; identifying a current topic for the meeting in dependence upon the meeting agenda; determining a correspondence indicator in dependence upon the meeting discussions and the topic keywords for the current topic, the correspondence indicator specifying the correspondence between the meeting agenda and the meeting discussion; and rendering the correspondence indicator to the participants of the meeting.
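Sketched as simple keyword overlap between recent discussion and the current topic's keywords (the exact measure is left open by the abstract):
```python
def correspondence_indicator(discussion_text, topic_keywords):
    """Fraction of topic keywords mentioned in the recent discussion."""
    words = set(discussion_text.lower().split())
    hits = sum(1 for kw in topic_keywords if kw.lower() in words)
    return hits / len(topic_keywords) if topic_keywords else 0.0

topic_kw = ["budget", "forecast", "headcount"]
print(correspondence_indicator("let's review the budget forecast", topic_kw))
# -> 0.666...: two of three topic keywords heard, discussion mostly on topic
```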