Abstract:
Methods, apparatus, and products are disclosed for establishing a financial market data component in a financial market data system that include: retrieving, by a configuration module from a configuration repository, at least a portion of a global system configuration for a financial market data system, the financial market data system comprising a plurality of financial market data components; identifying, by the configuration module in dependence upon the retrieved portion of the global system configuration, component characteristics of a particular financial market data component in the financial market data system, the component characteristics comprising a system identifier, a component functional group identifier, and a component business group identifier; and deploying, by the configuration module, the financial market data component in the financial market data system in dependence upon the component characteristics.
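A minimal sketch of the configuration flow described above, in TypeScript; the interface names, fields, and deployment step are illustrative assumptions rather than the disclosed implementation.

```typescript
// Hypothetical sketch of the configuration flow; all names are illustrative.
interface ComponentCharacteristics {
  systemId: string;           // system identifier
  functionalGroupId: string;  // component functional group identifier
  businessGroupId: string;    // component business group identifier
}

interface GlobalSystemConfiguration {
  components: Record<string, ComponentCharacteristics>;
}

interface ConfigurationRepository {
  fetchGlobalConfiguration(): Promise<GlobalSystemConfiguration>;
}

class ConfigurationModule {
  constructor(private repository: ConfigurationRepository) {}

  // Retrieve (a portion of) the global system configuration, identify the
  // characteristics of one component, and deploy it accordingly.
  async establishComponent(componentName: string): Promise<void> {
    const config = await this.repository.fetchGlobalConfiguration();
    const characteristics = config.components[componentName];
    if (!characteristics) {
      throw new Error(`No configuration found for component ${componentName}`);
    }
    this.deploy(componentName, characteristics);
  }

  private deploy(name: string, c: ComponentCharacteristics): void {
    // Placeholder for the actual deployment step (e.g. starting a feed handler
    // on the host associated with the functional and business groups).
    console.log(
      `Deploying ${name} [system=${c.systemId}, functional=${c.functionalGroupId}, business=${c.businessGroupId}]`
    );
  }
}
```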
Abstract:
Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving speech from a user; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional processing function. Typical embodiments may include updating a visual element after executing the additional processing function. Typical embodiments may include updating a voice form after executing the additional processing function. Typical embodiments also may include updating a state table after updating the voice form. Typical embodiments also may include restarting the voice form after executing the additional processing function.
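The update-handler flow can be sketched roughly as below; the intent table, function names, and the particular update steps shown are assumptions used only to illustrate the sequence described in the abstract.

```typescript
// Illustrative sketch only; handler names and shapes are assumptions, not the disclosed implementation.
type SemanticInterpretation = { intent: string; slots: Record<string, string> };

// Map each recognized intent to an additional processing function.
const additionalFunctions: Record<string, (si: SemanticInterpretation) => void> = {
  "show-weather": (si) => updateVisualElement("weather-panel", si.slots["city"] ?? ""),
};

function globalApplicationUpdateHandler(si: SemanticInterpretation): void {
  // Identify the additional processing function from the semantic interpretation...
  const fn = additionalFunctions[si.intent];
  if (fn) {
    fn(si);              // ...and execute it,
  }
  updateVoiceForm(si);   // then update the voice form,
  updateStateTable(si);  // update the state table,
  restartVoiceForm();    // and restart the voice form for the next utterance.
}

function updateVisualElement(id: string, value: string): void {
  console.log(`visual element ${id} <- ${value}`);
}
function updateVoiceForm(si: SemanticInterpretation): void {
  console.log(`voice form updated for intent ${si.intent}`);
}
function updateStateTable(si: SemanticInterpretation): void {
  console.log(`state table updated for intent ${si.intent}`);
}
function restartVoiceForm(): void {
  console.log("voice form restarted");
}

// Example: a recognizer would call the handler with its semantic interpretation.
globalApplicationUpdateHandler({ intent: "show-weather", slots: { city: "Boston" } });
```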
Abstract:
Establishing a preferred mode of interaction between a user and a multimodal application, including evaluating, by a multimodal application operating on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, user modal preference, and dynamically configuring multimodal content of the multimodal application in dependence upon the evaluation of user modal preference.
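A rough illustration of evaluating user modal preference and configuring content accordingly; the interaction statistics, threshold, and content settings are invented for the example.

```typescript
// Sketch under assumed names: infer which mode the user favors and shape the content served.
type Mode = "voice" | "visual";

interface InteractionStats {
  voiceInputs: number;
  nonVoiceInputs: number; // keystrokes, stylus taps, mouse clicks, ...
}

// Evaluate user modal preference from observed interaction history.
function evaluateModalPreference(stats: InteractionStats): Mode {
  return stats.voiceInputs >= stats.nonVoiceInputs ? "voice" : "visual";
}

// Dynamically configure the multimodal content in dependence upon that evaluation.
function configureContent(preferred: Mode): { promptStyle: string; grammarScope: string } {
  return preferred === "voice"
    ? { promptStyle: "spoken, verbose", grammarScope: "broad" }
    : { promptStyle: "visual, terse", grammarScope: "narrow" };
}

const preference = evaluateModalPreference({ voiceInputs: 12, nonVoiceInputs: 3 });
console.log(configureContent(preference)); // { promptStyle: "spoken, verbose", grammarScope: "broad" }
```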
Abstract:
Establishing a multimodal personality for a multimodal application, including evaluating, by the multimodal application, attributes of a user's interaction with the multimodal application; selecting, by the multimodal application, a vocal demeanor in dependence upon the values of the attributes of the user's interaction with the multimodal application; and incorporating, by the multimodal application, the vocal demeanor into the multimodal application.
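A sketch of selecting a vocal demeanor from attributes of the user's interaction; the attribute names, heuristics, and SSML wrapping are assumptions, not the disclosed method.

```typescript
// Sketch with invented attribute names; the heuristics below are illustrative only.
interface InteractionAttributes {
  averageUtteranceLength: number; // words per user utterance
  politenessMarkers: number;      // "please", "thank you", ...
  interruptions: number;          // barge-ins during prompts
}

interface VocalDemeanor {
  voiceName: string;
  speakingRate: number;           // relative rate, 1.0 = default
  formality: "formal" | "casual";
}

// Select a vocal demeanor in dependence upon the attribute values.
function selectVocalDemeanor(a: InteractionAttributes): VocalDemeanor {
  const formal = a.politenessMarkers > a.interruptions;
  return {
    voiceName: formal ? "helena" : "max",
    speakingRate: a.interruptions > 2 ? 1.2 : 1.0, // speed up for impatient users
    formality: formal ? "formal" : "casual",
  };
}

// Incorporate the demeanor, e.g. as SSML prosody settings wrapped around prompts.
function incorporate(demeanor: VocalDemeanor, prompt: string): string {
  return `<prosody rate="${demeanor.speakingRate}">${prompt}</prosody> <!-- voice: ${demeanor.voiceName} -->`;
}

const demeanor = selectVocalDemeanor({ averageUtteranceLength: 6, politenessMarkers: 4, interruptions: 1 });
console.log(incorporate(demeanor, "How may I help you?"));
```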
Abstract:
Establishing a multimodal advertising personality for a sponsor of a multimodal application, including associating one or more vocal demeanors with a sponsor of a multimodal application and presenting a speech portion of the multimodal application for the sponsor using at least one of the vocal demeanors associated with the sponsor.
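A minimal sketch, assuming a simple lookup from sponsor to associated vocal demeanors; the sponsor name, demeanor fields, and markup are illustrative.

```typescript
// Illustrative association of vocal demeanors with a sponsor; all names are assumptions.
interface VocalDemeanor { voiceName: string; pitch: string; rate: string; }

const sponsorDemeanors: Record<string, VocalDemeanor[]> = {
  "ExampleSodaCo": [
    { voiceName: "allison", pitch: "high", rate: "fast" },
    { voiceName: "mike", pitch: "medium", rate: "medium" },
  ],
};

// Present a speech portion of the application for a sponsor using one of the
// vocal demeanors associated with that sponsor.
function presentForSponsor(sponsor: string, text: string): string {
  const demeanors = sponsorDemeanors[sponsor] ?? [];
  const d = demeanors[0] ?? { voiceName: "default", pitch: "medium", rate: "medium" };
  return `<voice name="${d.voiceName}"><prosody pitch="${d.pitch}" rate="${d.rate}">${text}</prosody></voice>`;
}

console.log(presentForSponsor("ExampleSodaCo", "This search is brought to you by ExampleSodaCo."));
```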
Abstract:
Enabling dynamic VoiceXML in an X+V page of a multimodal application implemented with the multimodal application operating in a multimodal browser on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to a VoiceXML interpreter, including representing by the multimodal browser an XML element of a VoiceXML dialog of the X+V page as an ECMAScript object, the XML element comprising XML content; storing by the multimodal browser the XML content of the XML element in an attribute of the ECMAScript object; and accessing the XML content of the XML element in the attribute of the ECMAScript object from an ECMAScript script in the X+V page.
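One way to picture representing a VoiceXML dialog element as a script-visible object; the property names below are assumptions, since in X+V the multimodal browser itself would expose the ECMAScript object to scripts in the page.

```typescript
// Sketch of the idea only: mirror a VoiceXML dialog element into a script-visible object.
interface VoiceXmlElementObject {
  id: string;
  innerVXML: string; // the XML content of the element, stored as an attribute of the object
}

// "Represent" an XML element of a VoiceXML dialog as an object the page's scripts can reach.
function representDialogElement(id: string, xmlContent: string): VoiceXmlElementObject {
  return { id, innerVXML: xmlContent };
}

// A script in the X+V page can now read or rewrite the dialog's markup dynamically,
// e.g. to inject a prompt built from data fetched after the page was loaded.
const drinksForm = representDialogElement(
  "drinks-form",
  `<field name="drink"><prompt>Would you like coffee or tea?</prompt></field>`
);
drinksForm.innerVXML = drinksForm.innerVXML.replace("coffee or tea", "coffee, tea, or milk");
console.log(drinksForm.innerVXML);
```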
Abstract:
Dynamically defining a VoiceXML grammar of a multimodal application, implemented with the multimodal application operating on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to a VoiceXML interpreter, including loading an X+V page by the multimodal application from a web server into the multimodal device for execution, the X+V page including one or more VoiceXML grammars in one or more VoiceXML dialogs, including at least one in-line grammar that is declared but undefined; retrieving by the multimodal application a grammar definition for the in-line grammar from the web server without reloading the X+V page; and defining by the multimodal application the in-line grammar with the retrieved grammar definition before executing the VoiceXML dialog containing the in-line grammar.
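A hedged sketch of fetching and defining an in-line grammar without reloading the page; the URL scheme, object shapes, and use of fetch stand in for whatever transport the multimodal application would actually use.

```typescript
// Illustrative assumptions throughout: URL, file extension, and object shapes.
interface InlineGrammar {
  name: string;
  srgs?: string; // grammar body; undefined until fetched
}

// A grammar declared in the X+V page but not yet defined.
const cityGrammar: InlineGrammar = { name: "city" };

// Retrieve the grammar definition from the web server without reloading the page
// (in a browser this would typically be an XMLHttpRequest or fetch call).
async function retrieveGrammarDefinition(grammar: InlineGrammar): Promise<void> {
  const response = await fetch(`/grammars/${grammar.name}.grxml`);
  grammar.srgs = await response.text();
}

// Define the in-line grammar before executing the VoiceXML dialog that uses it.
async function runDialog(grammar: InlineGrammar): Promise<void> {
  if (!grammar.srgs) {
    await retrieveGrammarDefinition(grammar);
  }
  console.log(`executing dialog with grammar "${grammar.name}":\n${grammar.srgs}`);
}

runDialog(cityGrammar).catch(console.error);
```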