Abstract:
A method includes a computer device receiving extracted features of a driver from a driver-facing camera and of a road as viewed by a road-facing camera; the computer device further receiving extracted features reflecting the driver's behavior, including head and eye movement, speech, and gestures; the computer device further receiving extracted telemetry features from a vehicle; the computer device still further receiving extracted features reflecting the driver's biometrics; and a decision engine receiving information from the computer device representing each of the extracted features of the driver, wherein the driver's attention and emotional state are determined to evaluate risks associated with moving vehicles and the driver's ability to deal with any projected risks.
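A minimal sketch of how such a decision engine might fuse the extracted feature streams into a coarse risk assessment; every field name, weight, and threshold below is an illustrative assumption, not something specified by the abstract:

```python
from dataclasses import dataclass

@dataclass
class DriverFeatures:
    gaze_on_road: float      # fraction of recent frames with eyes on road, 0..1
    head_pose_dev: float     # head yaw deviation from road-facing, in degrees
    speech_agitation: float  # agitation score from speech prosody, 0..1
    heart_rate: float        # beats per minute from the biometric sensor

@dataclass
class SceneFeatures:
    time_to_collision: float  # seconds, estimated from the road-facing camera
    vehicle_speed: float      # km/h, from vehicle telemetry

def assess_risk(driver: DriverFeatures, scene: SceneFeatures) -> str:
    """Fuse driver-state and road-scene features into a coarse risk level."""
    attention = driver.gaze_on_road * max(0.0, 1.0 - driver.head_pose_dev / 45.0)
    stress = 0.5 * driver.speech_agitation + 0.5 * min(driver.heart_rate / 120.0, 1.0)
    # The hazard grows as speed rises and time-to-collision shrinks.
    hazard = min(1.0, 3.0 / max(scene.time_to_collision, 0.1)) \
        * min(scene.vehicle_speed / 100.0, 1.0)
    score = hazard * (1.0 - attention) * (0.5 + 0.5 * stress)
    if score > 0.5:
        return "alert_driver"
    if score > 0.2:
        return "monitor"
    return "ok"

print(assess_risk(
    DriverFeatures(gaze_on_road=0.4, head_pose_dev=30.0,
                   speech_agitation=0.7, heart_rate=110.0),
    SceneFeatures(time_to_collision=2.0, vehicle_speed=90.0),
))  # -> alert_driver
```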
Abstract:
Systems and methods are provided for automatically building a native phonetic lexicon for a speech-based application trained to process a native (base) language, wherein the native phonetic lexicon includes native phonetic transcriptions (base forms) for non-native (foreign) words which are automatically derived from non-native phonetic transcriptions of the non-native words.
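One way to picture the derivation is a phone-substitution table that maps each non-native phone to its closest native phone; the table, phone symbols, and example word below are toy assumptions for illustration only:

```python
# Hypothetical mapping from foreign (e.g. German) phones to the closest
# native (e.g. English) phones; a real system would derive this mapping
# from acoustic similarity rather than a hand-written table.
FOREIGN_TO_NATIVE = {
    "Y": "UW",   # German /y/ -> closest English vowel
    "OE": "ER",
    "X": "K",    # German /x/ (as in "Bach") -> English /k/
}

def to_native_baseform(foreign_phones):
    """Convert a non-native phonetic transcription into a native base form."""
    return [FOREIGN_TO_NATIVE.get(p, p) for p in foreign_phones]

# A toy German transcription of "Muenchen" mapped into English phones:
print(to_native_baseform(["M", "Y", "N", "X", "EH", "N"]))
# -> ['M', 'UW', 'N', 'K', 'EH', 'N']
```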
Abstract:
Systems and methods are provided for processing and executing commands in automated systems. For example, command processing systems and methods are provided which can automatically determine, evaluate or otherwise predict consequences of execution of misrecognized or misinterpreted user commands in automated systems and thus prevent undesirable or dangerous consequences that can result from execution of misrecognized/misinterpreted commands.
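A toy sketch of consequence-aware command dispatch: before acting on a decoded command, the system weighs recognition confidence against how hard the command is to undo. The command set, confidence values, and thresholds are assumptions:

```python
# Commands whose misrecognized execution would be costly or irreversible.
IRREVERSIBLE = {"unlock_doors", "delete_all_files"}

def dispatch(command: str, confidence: float) -> str:
    """Gate execution on predicted consequences of a misrecognition."""
    risky = command in IRREVERSIBLE
    if risky and confidence < 0.9:
        return f"confirm: did you mean '{command}'?"
    if confidence < 0.5:
        return "reject: please repeat the command"
    return f"execute: {command}"

for cmd, conf in [("play_music", 0.6), ("unlock_doors", 0.7), ("unlock_doors", 0.95)]:
    print(dispatch(cmd, conf))
```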
Abstract:
Techniques for managing vehicular emergencies are disclosed. For example, a method of managing a vehicular emergency includes the steps of collecting biometric data regarding at least one occupant of a vehicle, collecting data regarding at least one operational characteristic of the vehicle, and detecting vehicular emergencies through analysis of at least a portion of the biometric data and the operational characteristic data. The method may also include communicating at least one message relating to the data, wherein the content of the message is determined by a processing device based at least in part on the data, and/or controlling a function of the vehicle in response to the data. The method may also include collecting data regarding at least one operational characteristic of at least one proximate vehicle.
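A minimal sketch of the detection step, fusing biometric and operational samples into an emergency flag; the sensor fields and thresholds are assumptions for illustration only:

```python
def detect_emergency(sample: dict) -> bool:
    """Flag an emergency from fused biometric and vehicle-operational data."""
    driver_incapacitated = (
        sample["heart_rate"] < 40 or sample["heart_rate"] > 180
        or sample["eyes_closed_sec"] > 3.0
    )
    vehicle_anomalous = (
        abs(sample["lane_offset_m"]) > 1.5
        or sample["decel_g"] > 0.8  # crash-level deceleration
    )
    return driver_incapacitated or vehicle_anomalous

sample = {"heart_rate": 35, "eyes_closed_sec": 4.2,
          "lane_offset_m": 0.3, "decel_g": 0.1}
if detect_emergency(sample):
    print("send_message: medical emergency suspected; slowing vehicle")
```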
Abstract:
In a voice processing system, a multimodal request is received from a plurality of modality input devices, and the requested application is run to provide the user with feedback on the multimodal request. In the voice processing system, a multimodal aggregating unit is provided which receives a multimodal input from a plurality of modality input devices and provides an aggregated result to an application control based on the interpretation of the interaction ergonomics of the multimodal input within the temporal constraints of the multimodal input. Thus, the multimodal input from the user is recognized within a temporal window. Interpretation of the interaction ergonomics of the multimodal input includes interpretation of interaction biometrics and interaction mechani-metrics, wherein the interaction input of at least one modality may be used to bring meaning to at least one other input of another modality.
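The temporal-window idea can be sketched as follows: inputs from several modalities are buffered for a short window, and a gesture may resolve a deictic reference in the speech input. The event structure and window length are assumptions, not details from the abstract:

```python
from collections import deque

WINDOW_SEC = 2.0  # assumed length of the temporal window

class MultimodalAggregator:
    def __init__(self):
        self.events = deque()  # (timestamp, modality, payload)

    def push(self, t, modality, payload):
        self.events.append((t, modality, payload))
        # Drop events that have fallen outside the temporal window.
        while self.events and t - self.events[0][0] > WINDOW_SEC:
            self.events.popleft()
        return self._aggregate()

    def _aggregate(self):
        speech = next((p for _, m, p in self.events if m == "speech"), None)
        gesture = next((p for _, m, p in self.events if m == "gesture"), None)
        if speech and gesture and "that" in speech:
            # The gesture brings meaning to the deictic speech input.
            return speech.replace("that", gesture)
        return speech

agg = MultimodalAggregator()
agg.push(0.0, "gesture", "rear-left window")
print(agg.push(0.8, "speech", "open that"))  # -> "open rear-left window"
```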
Abstract:
Systems and methods are provided for intelligent control of microphones in speech processing applications, allowing speech data in the captured audio to be captured, recorded, and preprocessed in a way that optimizes speech decoding accuracy.
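One common realization of such microphone control is energy-based gating with a short pre-roll buffer, so the audio handed to the decoder includes the utterance onset; the frame size, threshold, and buffer length here are assumptions, not details from the abstract:

```python
from collections import deque

PREROLL_FRAMES = 10       # ~100 ms of audio at 10 ms frames (assumed)
ENERGY_THRESHOLD = 0.02   # assumed speech-onset energy threshold

def record_utterance(frames):
    """frames: iterable of (energy, samples) tuples from the microphone."""
    preroll = deque(maxlen=PREROLL_FRAMES)
    recording, active = [], False
    for energy, samples in frames:
        if not active:
            preroll.append(samples)
            if energy > ENERGY_THRESHOLD:      # speech onset detected
                recording.extend(preroll)      # include buffered pre-roll
                active = True
        else:
            recording.append(samples)
            if energy < ENERGY_THRESHOLD / 2:  # simple end-of-speech rule
                break
    return recording

frames = [(0.0, b"a")] * 5 + [(0.05, b"b")] * 20 + [(0.0, b"c")]
print(len(record_utterance(frames)))  # -> 26 (pre-roll + onset + speech + closing frame)
```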
Abstract:
An improved apparatus and method are provided for operating devices and systems in a motor vehicle while at the same time reducing vehicle operator distractions. One or more touch-sensitive pads are mounted on the steering wheel of the motor vehicle, and the vehicle operator touches the pads in a pre-specified synchronized pattern to perform functions such as controlling operation of the radio or adjusting a window. At least some of the touch patterns used to generate different commands may be selected by the vehicle operator. Usefully, the system of touch pad sensors and the signals generated thereby are integrated with speech recognition and/or facial gesture recognition systems, so that commands may be generated by synchronized multi-mode inputs.
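A toy decoder for such synchronized touch patterns: pads pressed together within a short sync window form a pattern that indexes a command table. The pad names, window length, and command table are illustrative assumptions:

```python
SYNC_WINDOW_SEC = 0.3  # assumed window for "synchronized" touches

COMMANDS = {
    frozenset({"left_pad", "right_pad"}): "radio_on",
    frozenset({"left_pad"}): "volume_down",
    frozenset({"right_pad"}): "volume_up",
    frozenset({"left_pad", "top_pad"}): "window_down",
}

def decode_touches(events):
    """events: list of (timestamp, pad_name) touch events, time-ordered."""
    if not events:
        return None
    t0 = events[0][0]
    # Pads touched within the sync window count as one synchronized pattern.
    pads = frozenset(pad for t, pad in events if t - t0 <= SYNC_WINDOW_SEC)
    return COMMANDS.get(pads)

print(decode_touches([(0.00, "left_pad"), (0.12, "right_pad")]))  # -> radio_on
```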