Abstract:
A method for controlling an interactive display is provided. The method receives a set of voice input data, via a voice input device communicatively coupled to the interactive display; interprets, by at least one processor, the set of voice input data to produce an interpreted result, wherein the at least one processor is communicatively coupled to the voice input device and the interactive display; presents, by the interactive display, a text representation of the interpreted result coupled to a user-controlled cursor; receives, by a user interface, a user input selection of a textual or graphical element presented by the interactive display, wherein the user interface is communicatively coupled to the at least one processor and the interactive display; and performs, by the at least one processor, an operation associated with the interpreted result and the user input selection.
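The flow this abstract describes (interpret voice input, present the interpreted text at a cursor, combine it with a user selection) can be illustrated with a minimal Python sketch. All names here are hypothetical and the interpreter is a placeholder; a real system would run speech recognition on the voice input device's data.

```python
from dataclasses import dataclass

@dataclass
class InterpretedResult:
    """Hypothetical container for the processor's interpretation of voice data."""
    text: str
    intent: str

def interpret(voice_samples):
    # Placeholder interpreter: a real system would run speech recognition here.
    text = " ".join(voice_samples)
    return InterpretedResult(text=text, intent="select")

def handle_interaction(voice_samples, cursor_pos, selected_element):
    """Sketch of the described flow: interpret voice input, couple its text
    representation to the user-controlled cursor, then perform an operation
    associated with both the interpretation and the user's selection."""
    result = interpret(voice_samples)
    # Present the interpreted text coupled to the user-controlled cursor.
    display_label = {"pos": cursor_pos, "text": result.text}
    # Combine the interpreted result with the selected display element.
    operation = f"{result.intent}:{selected_element}"
    return display_label, operation

label, op = handle_interaction(["open", "chart"], (120, 45), "altitude_widget")
```

The key point of the sketch is that the final operation depends on both inputs: the voice interpretation supplies the intent, while the cursor-driven selection supplies the target element.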
Abstract:
A method for implementing a speaker-independent speech recognition system with reduced latency is provided. The method includes capturing voice data at a carry-on device from a user during a pre-flight check-in performed by the user for an upcoming flight; extracting features associated with the user from the captured voice data at the carry-on device; uplinking the extracted features to the speaker-independent speech recognition system onboard the aircraft; and adapting the extracted features with an acoustic feature model of the speaker-independent speech recognition system.
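The division of labor described here (features extracted on the carry-on device, then uplinked and adapted into the onboard model) can be sketched as follows. The per-frame RMS-energy "features" and the linear blending stand in for real acoustic features (e.g. MFCCs) and real speaker-adaptation schemes (e.g. MAP or MLLR); both are illustrative assumptions, not the patent's method.

```python
import math

def extract_features(samples, frame_size=4):
    """Toy per-frame RMS-energy features, computed on the carry-on device."""
    feats = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        feats.append(math.sqrt(sum(x * x for x in frame) / frame_size))
    return feats

def adapt_model(model_means, speaker_feats, weight=0.2):
    """Blend uplinked speaker features into the onboard acoustic model's
    feature means: a crude stand-in for speaker adaptation."""
    n = min(len(model_means), len(speaker_feats))
    return [(1 - weight) * model_means[i] + weight * speaker_feats[i] for i in range(n)]

# Features are extracted before boarding, so only this small vector is uplinked.
feats = extract_features([0.0, 1.0, 0.0, -1.0, 0.5, 0.5, -0.5, -0.5])
adapted = adapt_model([0.6, 0.6], feats)
```

The latency reduction comes from uplinking only the compact feature vector rather than raw audio, so the onboard system can adapt before the user's first in-flight utterance.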
Abstract:
Methods and systems for audio processing using push to talk (PTT) audio attributes to distinguish utterances are provided. The system receives an audio stream comprising two or more utterances. The system includes a control module comprising a processor and a memory, the control module being configured to: receive the audio stream; store, in real time, the audio stream in a current buffer in the memory; break the audio stream into a plurality of time segments of equal size; and, for each time segment of the plurality of time segments, process the time segment with a push to talk (PTT) audio attribute to look for a PTT event, defined as the release of a PTT button; and, upon identifying the PTT event, respond to the identified PTT event by (i) closing the current buffer, (ii) opening a new data storage location, and (iii) defining the new data storage location as the current buffer.
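The buffering scheme above is algorithmic enough to sketch directly: stream audio into a current buffer in fixed-size time segments, and on a PTT-release event close the buffer and open a new one, so each buffer holds one utterance. This is a simplified Python illustration with assumed names; here "time" is a sample index and PTT releases are given as a precomputed set rather than a live audio attribute.

```python
def segment_by_ptt(audio_stream, ptt_release_times, segment_size):
    """Break the stream into equal time segments; when a segment contains a
    PTT-release event, close the current buffer and open a new one."""
    buffers = [[]]  # buffers[-1] plays the role of the "current buffer"
    releases = set(ptt_release_times)
    for seg_start in range(0, len(audio_stream), segment_size):
        segment = audio_stream[seg_start:seg_start + segment_size]
        buffers[-1].extend(segment)  # store the stream as it arrives, in real time
        # Process this time segment: look for a PTT-release event inside it.
        if any(t in releases for t in range(seg_start, seg_start + len(segment))):
            buffers.append([])  # (i)-(iii): close buffer, open and adopt a new one
    return [b for b in buffers if b]

# Twelve samples, segment size 3, PTT released at sample 5 → two utterances.
utterances = segment_by_ptt(list(range(12)), ptt_release_times=[5], segment_size=3)
```

Note that the split lands on a segment boundary, not exactly at the release sample: segment granularity bounds how precisely utterances are separated, which is the trade-off implied by processing fixed-size time segments.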
Abstract:
Methods and systems are provided for validation of speech commands from an aircraft pilot. The method comprises receiving a speech command from the aircraft pilot with a voice communication device that is part of a cockpit system. Next, the speech command is decoded into a computer readable format. The decoded speech command is then checked against the present aircraft state as indicated by avionic sensors of the aircraft and against an approved pre-condition that is stored in a pre-condition database. The decoded speech command is validated if the decoded speech command is consistent with the present aircraft state and consistent with the approved pre-condition. An input to the cockpit system is updated to execute the speech command from the aircraft pilot if the decoded speech command is validated.
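The two-sided check described here (decoded command must be consistent with both the sensed aircraft state and an approved pre-condition) reduces to simple gating logic. The following sketch uses an invented command/state encoding and a dictionary standing in for the pre-condition database; none of these representations come from the source.

```python
def validate_command(decoded_command, aircraft_state, pre_conditions):
    """Accept a decoded speech command only if an approved pre-condition exists
    for it and that pre-condition holds against the present aircraft state."""
    cmd, value = decoded_command
    allowed = pre_conditions.get(cmd)  # look up the approved pre-condition
    if allowed is None or not allowed(value, aircraft_state):
        return False  # inconsistent with state or no approved pre-condition
    return True  # validated: the cockpit system input may be updated

pre_conditions = {
    # Illustrative rule: gear may only be raised once the aircraft is airborne.
    "gear_up": lambda value, state: state["on_ground"] is False,
}
ok = validate_command(("gear_up", None), {"on_ground": False}, pre_conditions)
bad = validate_command(("gear_up", None), {"on_ground": True}, pre_conditions)
```

The design choice worth noting is that validation is conjunctive: a command that is plausible in isolation is still rejected when avionic sensors contradict it, which guards against misrecognized or mistimed speech reaching the cockpit system.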
Abstract:
A system and method are provided for adaptively processing audio commands supplied by a user in an aircraft cabin. The method includes receiving ambient noise in the aircraft cabin via one or more audio input devices; sampling, with a processor, the received ambient noise; and analyzing, in the processor, the sampled ambient noise and, based on the analysis, selecting one or more filter functions and adjusting one or more filter parameters associated with the one or more selected filter functions. Audio and ambient noise are selectively received via the one or more audio input devices and are filtered through the selected one or more filter functions to thereby supply filtered audio.
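The adapt-then-filter loop described above can be sketched in a few lines: estimate the ambient-noise level, select a filter parameter from it, then filter subsequent audio. The mean-absolute-amplitude noise estimate, the threshold, and the moving-average filter are all illustrative stand-ins for whatever analysis and filter functions a real cabin system would use.

```python
def analyze_noise(samples):
    """Estimate ambient-noise level as mean absolute amplitude."""
    return sum(abs(s) for s in samples) / len(samples)

def select_filter(noise_level):
    """Pick a smoothing window from the noise estimate: a noisier cabin gets
    heavier smoothing. The threshold is illustrative, not from the source."""
    return 5 if noise_level > 0.3 else 2

def moving_average(samples, window):
    """A simple FIR low-pass stand-in for the selected filter function."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

noise = analyze_noise([0.4, -0.5, 0.45, -0.4])   # sampled ambient noise
window = select_filter(noise)                     # adjust filter parameter
filtered = moving_average([1.0, 0.0, 1.0, 0.0, 1.0, 0.0], window)
```

Because the parameter is derived from a fresh noise sample rather than fixed at design time, the same pipeline adapts as cabin conditions change, which is the point of the claimed method.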
Abstract:
A method for displaying received radio voice messages onboard an aircraft is provided. The method post-processes, by at least one processor onboard the aircraft, a set of speech recognition (SR) hypothetical data to increase accuracy of an associated SR system, by: obtaining, by the at least one processor, secondary source data from a plurality of secondary sources; comparing, by the at least one processor, the set of SR hypothetical data to the secondary source data; and identifying, by the at least one processor, an aircraft tail number using the set of SR hypothetical data and the secondary source data. The method then identifies, by the at least one processor, a subset of the received radio voice messages including the tail number, and presents, via a display device onboard the aircraft, the subset using distinguishing visual characteristics.
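The post-processing step (comparing SR hypotheses against secondary-source data to pin down a tail number, then filtering messages by it) can be illustrated with simple string matching. Here the "secondary sources" are reduced to an assumed list of known tail numbers, and the normalization is deliberately naive; a real system would use richer sources and matching.

```python
def identify_tail_number(sr_hypotheses, known_tail_numbers):
    """Compare SR hypothesis text against secondary-source data (here, a list
    of known tail numbers) and return the first one found in a hypothesis."""
    for hyp in sr_hypotheses:
        normalized = hyp.upper().replace(" ", "")
        for tail in known_tail_numbers:
            if tail.replace("-", "") in normalized:
                return tail
    return None

def highlight_messages(messages, tail):
    """Return the subset of received messages containing the tail number; a
    display would render these with distinguishing visual characteristics."""
    return [m for m in messages
            if tail.replace("-", "") in m.upper().replace(" ", "")]

tail = identify_tail_number(["cleared to land n 1 2 3 a b"], ["N123AB", "N987CD"])
subset = highlight_messages(["N123AB descend FL90", "N987CD hold"], tail)
```

The secondary-source comparison is what boosts accuracy: an SR hypothesis that garbles one character can still resolve to a valid tail number because the candidate set is constrained by independent data.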
Abstract:
Methods and systems are provided for assisting operation of a vehicle using speech recognition and transcription to provide a conversation log graphical user interface (GUI) display. One method involves analyzing a transcription of an audio communication to identify a discrepancy with respect to an expected response to the communication by an operator of the vehicle, generating, on the conversation log GUI display, a selectable GUI element associated with the discrepancy in association with the graphical representation of the transcription of the audio communication, and in response to user selection of the selectable GUI element, providing, on the conversation log GUI display, a graphical representation of information relating to the discrepancy to facilitate challenging the veracity of the transcribed audio communication.
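The discrepancy-analysis step can be sketched as a token-level comparison between the transcribed communication and the expected response, producing the entries a conversation-log GUI could attach selectable elements to. The token-by-position comparison is an illustrative simplification of whatever analysis the actual system performs.

```python
def find_discrepancies(transcription, expected_response):
    """Flag positions where the transcribed readback differs from the expected
    response; each entry could back a selectable GUI element in the log."""
    trans_tokens = transcription.lower().split()
    expected_tokens = expected_response.lower().split()
    discrepancies = []
    for i, exp in enumerate(expected_tokens):
        got = trans_tokens[i] if i < len(trans_tokens) else None
        if got != exp:
            discrepancies.append({"position": i, "expected": exp, "heard": got})
    return discrepancies

# Expected readback says "nine zero", transcription says "eight zero".
issues = find_discrepancies("descend flight level eight zero",
                            "descend flight level nine zero")
```

Surfacing the expected token alongside what was heard is what lets the operator challenge the transcription's veracity: the GUI can show both values at the flagged position instead of silently trusting either one.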