Abstract:
A method and system for time-based viewing to coordinate airport collaborative decision making (A-CDM) events between a pilot and ground personnel on a cockpit display of an aircraft. The method includes: generating a time scale with a list of A-CDM events of the aircraft during inbound, turnaround and outbound flight operations to an airport, based on sensor data contributed by aircraft systems and A-CDM event data related to airport operations, to generate a current time moving window in the time scale for identifying A-CDM events; generating a graphic user interface using the sensor data for positioning the current time moving window on the time scale; and collaboratively communicating decisions with ground personnel based on an aircraft state and a current A-CDM event identified by the positioning of the current time moving window in the time scale, to expedite completion of a particular A-CDM event.
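A minimal sketch in Python of the current-time moving window over a list of timestamped A-CDM events; the event names, fields, and window width below are illustrative assumptions, not taken from the abstract.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class ACDMEvent:
        name: str            # e.g. "target off-block time" (illustrative)
        scheduled: datetime  # scheduled or estimated time of the event

    def events_in_moving_window(events, now, half_width_min=15):
        """Return the A-CDM events falling inside a current-time moving
        window centered on 'now', for positioning on the time scale."""
        window = timedelta(minutes=half_width_min)
        return [e for e in events if abs(e.scheduled - now) <= window]

    # Illustrative usage: identify the current A-CDM event(s) to highlight.
    events = [
        ACDMEvent("actual landing time", datetime(2024, 1, 1, 9, 0)),
        ACDMEvent("target off-block time", datetime(2024, 1, 1, 10, 5)),
    ]
    print(events_in_moving_window(events, now=datetime(2024, 1, 1, 10, 0)))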
Abstract:
A method for controlling an interactive display is provided. The method receives a set of voice input data, via a voice input device communicatively coupled to the interactive display; interprets, by at least one processor, the set of voice input data to produce an interpreted result, wherein the at least one processor is communicatively coupled to the voice input device and the interactive display; presents, by the interactive display, a text representation of the interpreted result coupled to a user-controlled cursor; receives, by a user interface, a user input selection of a textual or graphical element presented by the interactive display, wherein the user interface is communicatively coupled to the at least one processor and the interactive display; and performs, by the at least one processor, an operation associated with the interpreted result and the user input selection.
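A rough Python sketch of the voice-plus-cursor interaction flow described above; the class, the recognizer and display interfaces, and the apply operation are illustrative assumptions rather than the claimed implementation.

    class InteractiveDisplayController:
        def __init__(self, recognizer, display):
            self.recognizer = recognizer  # voice input device + interpreter (assumed interface)
            self.display = display        # interactive display (assumed interface)
            self.pending = None           # most recent interpreted result

        def on_voice_input(self, audio):
            # Interpret the voice input data and present the text
            # representation attached to the user-controlled cursor.
            self.pending = self.recognizer.interpret(audio)
            self.display.attach_text_to_cursor(self.pending)

        def on_element_selected(self, element):
            # Perform the operation associated with the interpreted result
            # and the selected textual or graphical element.
            if self.pending is not None:
                self.display.apply(self.pending, element)
                self.pending = None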
Abstract:
A system and method for providing integrated time-based notification and aircraft status data on a display is provided. The system and method receives aircraft status data, pilot input data, data link notification data and aircraft notification data. The system and method generates and displays a timescale region divided into equal intervals from a predetermined origin with an icon graphically representative of a position of the aircraft overlaid with an icon graphically representative of notification data. The system and method continuously updates the temporal data as the aircraft flies, and responds to user requests to pan forward in time, pan backward in time and adjust the zoom on the timescale.
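A minimal sketch, assuming a pixel-based timescale divided into equal intervals from a predetermined origin; the interval width, units, and method names are illustrative.

    from datetime import timedelta

    class Timescale:
        def __init__(self, origin, interval_min=5, px_per_interval=40):
            self.origin = origin              # predetermined origin time
            self.interval_min = interval_min  # minutes covered by each interval
            self.px_per_interval = px_per_interval

        def x_position(self, t):
            """Pixel offset from the origin for an aircraft or notification icon at time t."""
            minutes = (t - self.origin).total_seconds() / 60.0
            return minutes / self.interval_min * self.px_per_interval

        def pan(self, minutes):
            """Pan the timescale forward (positive) or backward (negative) in time."""
            self.origin += timedelta(minutes=minutes)

        def zoom(self, factor):
            """Adjust zoom by scaling the minutes each interval spans (factor > 1 zooms in)."""
            self.interval_min = max(1, self.interval_min / factor)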
Abstract:
Integrated controller-pilot datalink communication (CPDLC) systems and methods for operating the same are disclosed. In one implementation, an integrated CPDLC system includes a plurality of CPDLC-enabled avionics devices and a CPDLC context manager coupled with each of the plurality of CPDLC-enabled avionics devices. The CPDLC system further includes a shared CPDLC context memory coupled with the CPDLC context manager and a CPDLC message in/out buffer coupled with the CPDLC context manager.
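A sketch, under assumptions, of how the pieces named above could be coupled: a context manager connected to several CPDLC-enabled avionics devices, a shared context memory, and a message in/out buffer. The names and the notify interface are illustrative.

    from collections import deque

    class CPDLCContextManager:
        def __init__(self, devices):
            self.devices = devices        # CPDLC-enabled avionics devices (assumed to expose notify())
            self.shared_context = {}      # shared CPDLC context memory
            self.message_buffer = deque() # CPDLC message in/out buffer

        def receive(self, message):
            # Buffer an incoming CPDLC message and update the shared context
            # so every coupled device sees a consistent dialogue state.
            self.message_buffer.append(message)
            self.shared_context[message["msg_id"]] = message
            for device in self.devices:
                device.notify(message)

        def send(self, message):
            # Queue an outgoing message composed by any coupled device.
            self.shared_context[message["msg_id"]] = message
            self.message_buffer.append(message)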
Abstract:
A system and method for recognizing speech on board an aircraft that compensate for different regional dialects over an area comprising at least first and second distinct geographical regions. The method comprises analyzing speech in the first distinct geographical region using speech data characteristics representative of speech in the first distinct geographical region; detecting a change in position from the first distinct geographical region to the second distinct geographical region; and, upon detecting that the aircraft has transitioned from the first distinct geographical region to the second distinct geographical region, analyzing speech in the second distinct geographical region using speech data characteristics representative of speech in the second distinct geographical region.
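A minimal sketch, assuming each distinct geographical region is paired with its own dialect model; the region lookup, model interface, and transcribe call are illustrative assumptions.

    class RegionalSpeechRecognizer:
        def __init__(self, dialect_models):
            self.dialect_models = dialect_models  # {region_id: dialect-specific model}
            self.current_region = None

        def update_position(self, region_id):
            # Detect a transition between distinct geographical regions and
            # switch to the speech data characteristics for the new region.
            if region_id != self.current_region:
                self.current_region = region_id

        def analyze(self, audio):
            # Analyze speech using the model for the current region.
            model = self.dialect_models[self.current_region]
            return model.transcribe(audio)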