Abstract:
The present invention discloses a solution for providing a phonetic representation for a content item along with the content item delivered to a speech enabled computing device. The phonetic representation can be specified in a manner that enables it to be added to a speech recognition grammar of the speech enabled computing device. Thus, the device can recognize speech commands involving the content item using the newly added phonetic representation. Current implementations of speech recognition systems of this type rely on internal generation of the speech recognition data that is added to the speech recognition grammar. Generation of speech recognition data can, however, be resource intensive, which can be particularly problematic when the speech enabled device is resource limited. The disclosed solution offloads the task of providing the speech recognition data to an external device, such as a relatively resource rich server or a desktop device.
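A minimal sketch of the delivery idea follows, using hypothetical names (ContentItem, SpeechGrammar, add_entry) that are not taken from the disclosure: the content item arrives bundled with a pre-computed phonetic form, which the device simply inserts into its recognition grammar instead of generating locally.

```python
# Hypothetical sketch: a content item is delivered together with a phonetic
# representation, which is added to the device's speech recognition grammar.

from dataclasses import dataclass, field


@dataclass
class ContentItem:
    item_id: str
    title: str
    phonetic: str          # phonetic representation supplied by the sender


@dataclass
class SpeechGrammar:
    entries: dict = field(default_factory=dict)   # item_id -> phonetic form

    def add_entry(self, item: ContentItem) -> None:
        # The device stores the externally provided phonetic form rather than
        # generating it on-device, saving limited local resources.
        self.entries[item.item_id] = item.phonetic

    def match(self, recognized_phonetics: str):
        # Return the ids of content items whose phonetic form matches input.
        return [i for i, p in self.entries.items() if p == recognized_phonetics]


grammar = SpeechGrammar()
grammar.add_entry(ContentItem("song-42", "Blue Train", "b l uw . t r ey n"))
print(grammar.match("b l uw . t r ey n"))   # ['song-42']
```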
Abstract:
A computer system has a notification manager for playing a message to a user by selecting one of a plurality of audio notifications. The corresponding method includes the step of setting a priority level for each notification arriving in a queue. Each notification is inserted into a position in the queue based upon its priority level, such that audio notifications at the top of the queue generally have a higher priority than audio notifications at the bottom of the queue. The notification at the top of the queue can be selected if its priority level is greater than a predetermined gate level. Once a notification is selected, a message corresponding to the selected notification is played to the user.
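A short sketch of the described queue behavior, with illustrative names (NotificationManager, gate_level) that are assumptions rather than terms from the abstract: notifications are ordered by priority, and the top entry is played only if its priority exceeds the gate level.

```python
# Sketch: priority-ordered notification queue with a gate-level check.

import heapq
import itertools


class NotificationManager:
    def __init__(self, gate_level: int):
        self.gate_level = gate_level
        self._queue = []                       # min-heap keyed on -priority
        self._counter = itertools.count()      # tie-breaker keeps FIFO order

    def add(self, priority: int, message: str) -> None:
        heapq.heappush(self._queue, (-priority, next(self._counter), message))

    def play_next(self):
        # Select the highest-priority notification only if it passes the gate.
        if self._queue and -self._queue[0][0] > self.gate_level:
            _, _, message = heapq.heappop(self._queue)
            print(f"Playing: {message}")       # stand-in for audio playback
            return message
        return None


mgr = NotificationManager(gate_level=5)
mgr.add(3, "New email received")
mgr.add(8, "Low fuel warning")
mgr.play_next()    # plays "Low fuel warning"; the priority-3 entry stays queued
```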
Abstract:
The present invention discloses creating and using speech recognition grammars of reduced size. The reduced speech recognition grammars can include a set of entries, each entry having a unique identifier and a phonetic representation that is used when matching speech input against the entries. Each entry can lack a textual spelling corresponding to the phonetic representation. The reduced speech recognition grammar can be digitally encoded and stored in a computer readable medium, such as a hard drive or flash memory of a portable speech enabled device.
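A minimal sketch of the reduced entry structure, under assumed names (GrammarEntry, ReducedGrammar): each entry stores only a unique identifier and a phonetic representation, with no textual spelling, which keeps the stored grammar small.

```python
# Sketch: grammar entries carry an id and phonemes only, no textual spelling.

from dataclasses import dataclass


@dataclass(frozen=True)
class GrammarEntry:
    entry_id: int          # unique identifier returned on a match
    phonemes: tuple        # phonetic representation used for matching only


class ReducedGrammar:
    def __init__(self):
        self._by_phonemes = {}

    def add(self, entry: GrammarEntry) -> None:
        self._by_phonemes[entry.phonemes] = entry.entry_id

    def recognize(self, phonemes: tuple):
        # Matching is done purely on phonetics; the caller resolves the id
        # back to a content item (e.g. a song title) outside the grammar.
        return self._by_phonemes.get(phonemes)


g = ReducedGrammar()
g.add(GrammarEntry(1001, ("k", "ae", "t")))
print(g.recognize(("k", "ae", "t")))    # 1001
```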
Abstract:
A method and a system for improving recall of speech data in a computer speech system can include a plurality of speech cache management steps, including providing a speech cache, receiving a speech system input, and identifying a speech event in the received speech system input, the speech event comprising speech data. Subsequently, the speech data can be compared to pre-determined speech cache entry criteria; if the speech data meets one of the pre-determined entry criteria, at least one entry corresponding to the speech data can be added to the speech cache. Additionally, the speech data can be compared to pre-determined speech cache exit criteria; if the speech data meets one of the pre-determined exit criteria, at least one entry corresponding to the speech data can be purged from the speech cache. The entry criteria can include frequently used speech data, recently used speech data, and important speech data. Similarly, the exit criteria can include the least frequently used, least recently used, and least important speech data associated with the entries in the speech cache.
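The sketch below illustrates one possible entry/exit policy under assumed thresholds (capacity, min_uses are hypothetical parameters): data used often enough is added to the cache, and the least recently used entry is purged when the cache is full.

```python
# Sketch: speech cache with frequency-based entry and LRU-based exit criteria.

import time


class SpeechCache:
    def __init__(self, capacity: int = 3, min_uses: int = 2):
        self.capacity = capacity
        self.min_uses = min_uses
        self._uses = {}        # speech data -> observed use count
        self._cache = {}       # speech data -> last-used timestamp

    def observe(self, speech_data: str) -> None:
        # Entry criterion: data used at least `min_uses` times gets cached.
        self._uses[speech_data] = self._uses.get(speech_data, 0) + 1
        if self._uses[speech_data] >= self.min_uses:
            self._cache[speech_data] = time.monotonic()
            self._purge_if_needed()

    def _purge_if_needed(self) -> None:
        # Exit criterion: purge the least recently used entry when over capacity.
        while len(self._cache) > self.capacity:
            oldest = min(self._cache, key=self._cache.get)
            del self._cache[oldest]

    def contains(self, speech_data: str) -> bool:
        return speech_data in self._cache
```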
Abstract:
An in-vehicle system shares speech processing resources among multiple applications located within a vehicle. The system can include one or more software applications, each associated with a different functionally independent in-vehicle console. Each application can have a console specific user interface. The system can also include a single in-vehicle speech processing system implemented separately from the in-vehicle consoles. The speech processing system can execute speech processing tasks responsive to requests received from the applications. That is, the in-vehicle speech processing system can provide speech processing capabilities for the applications. The provided speech processing capabilities can include text-to-speech capabilities and speech recognition capabilities.
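A rough sketch of the sharing arrangement, with names (SpeechService, ConsoleApp) that are assumptions for illustration: each console application holds a reference to one shared speech service instead of embedding its own engine.

```python
# Sketch: several console applications share a single speech processing service.

class SpeechService:
    """Single in-vehicle speech engine shared by all console applications."""

    def synthesize(self, app_name: str, text: str) -> None:
        print(f"[TTS for {app_name}] {text}")          # stand-in for audio out

    def recognize(self, app_name: str, audio: bytes) -> str:
        # A real engine would decode the audio; return a placeholder here.
        return f"<utterance recognized on behalf of {app_name}>"


class ConsoleApp:
    def __init__(self, name: str, speech: SpeechService):
        self.name = name
        self.speech = speech    # apps hold a reference, not their own engine

    def announce(self, text: str) -> None:
        self.speech.synthesize(self.name, text)


shared = SpeechService()
ConsoleApp("navigation", shared).announce("Turn left in 200 meters")
ConsoleApp("climate", shared).announce("Cabin temperature set to 21 degrees")
```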
Abstract:
A method for implementing speech focus in a speech processing system can include the step of establishing a default focus receiver as the first entity to request speech focus of a speech processing system having multiple applications that share speech resources based upon speech focus. An event occurrence can be detected. An event handler of the default focus receiver can previously define behavior for the event occurrence, and default system behavior can be implemented within the speech processing system for the event occurrence. The default system behavior can be utilized when speech focus is not assigned during the event occurrence. Responsive to the event occurrence, at least one programmatic action can be performed in accordance with machine readable instructions of the event handler, and the default system behavior is not implemented responsive to the event occurrence.
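A minimal sketch of the focus mechanism, using assumed names (SpeechFocusManager, request_focus, dispatch): the first requester becomes the default focus receiver, and when an event arrives while no application holds focus, the default receiver's handler runs in place of the built-in default behavior.

```python
# Sketch: default focus receiver handles events when no app holds speech focus.

class SpeechFocusManager:
    def __init__(self):
        self.focus_holder = None
        self.default_receiver = None

    def request_focus(self, app) -> None:
        # The first entity to request focus becomes the default focus receiver.
        if self.default_receiver is None:
            self.default_receiver = app
        self.focus_holder = app

    def release_focus(self, app) -> None:
        if self.focus_holder is app:
            self.focus_holder = None

    def dispatch(self, event: str) -> None:
        # With no focus assigned, the default receiver's event handler runs
        # instead of the default system behavior.
        target = self.focus_holder or self.default_receiver
        if target is not None:
            target.handle_event(event)
        else:
            print(f"default system behavior for {event}")


class App:
    def __init__(self, name: str):
        self.name = name

    def handle_event(self, event: str) -> None:
        print(f"{self.name} handles {event}")
```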
Abstract:
The present invention discloses a method for semantically processing speech for speech recognition purposes. The method can reduce the amount of memory required for a Viterbi search of an N-gram language model having a value of N greater than two, and also having at least one embedded grammar that appears in multiple contexts, to a memory size of approximately a bigram model search space with respect to the embedded grammar. The method also reduces CPU requirements. These reductions can be achieved by representing the embedded grammar as a recursive transition network (RTN), where only one instance of the recursive transition network is used for all of the contexts. Other than the embedded grammars, a Hidden Markov Model (HMM) strategy can be used for the search space.
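The sharing idea can be sketched as follows, with an illustrative RTN class and example contexts that are assumptions, not the disclosed implementation: one RTN instance represents the embedded grammar, and every N-gram context references that single instance instead of expanding its own copy.

```python
# Sketch: one shared RTN instance for an embedded grammar across contexts.

class RTN:
    """Single shared sub-network for an embedded grammar (e.g. <city>)."""

    def __init__(self, name: str, phrases):
        self.name = name
        self.phrases = set(phrases)

    def accepts(self, tokens: tuple) -> bool:
        return " ".join(tokens) in self.phrases


city_rtn = RTN("<city>", ["new york", "san jose"])

# Two different trigram contexts reference the same RTN object, so memory for
# the embedded grammar is paid once, roughly like a bigram-sized search space.
contexts = {
    ("fly", "to"): city_rtn,
    ("weather", "in"): city_rtn,
}

assert contexts[("fly", "to")] is contexts[("weather", "in")]
print(contexts[("fly", "to")].accepts(("san", "jose")))   # True
```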