Abstract:
The invention relates to a method for predicting the attention of at least one audience during a presentation by at least one speaker. The method comprises the steps of: measuring (E25) vocal or gestural characteristics of the at least one speaker of the current presentation and/or measuring content characteristics of the current presentation; measuring (E26) at least one duration or occurrence parameter of the measured characteristics; consulting (E27) a database containing a correspondence between speaker vocal or gestural characteristics and/or presentation content characteristics, duration or occurrence parameters linked to these characteristics, and information relating to the evolution of the attention level for these characteristics and parameters, and retrieving the information relating to the evolution of the attention level corresponding to the measurements made; and presenting (E28), to the at least one speaker of the presentation, a prediction of the attention level based on the retrieved information relating to the evolution of the attention level. The invention also relates to a learning phase for obtaining the correspondences of the database, to a prediction device implementing the described method, and to a learning device implementing the learning phase.
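To make the claimed steps concrete, here is a minimal Python sketch of the prediction loop (steps E25 to E28). All names (Measurement, FeatureDatabase, predict_attention), the bucketed lookup, and the averaging rule are assumptions for illustration; the abstract does not specify data structures or an aggregation method.

    from dataclasses import dataclass

    @dataclass
    class Measurement:
        feature: str      # e.g. "speech_rate" or "gesture_frequency" (E25)
        duration: float   # duration of the feature, in seconds (E26)
        occurrences: int  # number of occurrences of the feature (E26)

    class FeatureDatabase:
        """Maps (feature, duration bucket, occurrences) to an attention trend (E27)."""
        def __init__(self, table):
            self._table = table  # assumed to be built during the learning phase

        def attention_trend(self, m):
            key = (m.feature, round(m.duration), m.occurrences)
            return self._table.get(key, 0.0)

    def predict_attention(measurements, db):
        # Aggregate the retrieved attention-level trends into one
        # prediction shown to the speaker (E28); averaging is assumed.
        trends = [db.attention_trend(m) for m in measurements]
        return sum(trends) / len(trends) if trends else 0.0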
Abstract:
Example embodiments disclosed herein relate to separated audio analysis and processing. A system for processing an audio signal is disclosed. The system includes an audio analysis module configured to analyze an input audio signal to determine a processing parameter for the input audio signal, the input audio signal being represented in the time domain. The system also includes an audio processing module configured to process the input audio signal in parallel with the audio analysis module. The audio processing module includes a time domain filter configured to filter the input audio signal to obtain an output audio signal in the time domain, and a filter controller configured to control a filter coefficient of the time domain filter based on the processing parameter determined by the audio analysis module. A corresponding method and computer program product for processing an audio signal are also disclosed.
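As an illustration of the parallel analysis/processing split, the following Python sketch separates the two paths. The class names, the energy-based parameter, and the single-tap FIR filter are assumptions chosen for brevity, not the patent's design.

    import numpy as np

    class AudioAnalysisModule:
        def analyze(self, block):
            # Derive a processing parameter from the time-domain block,
            # here an assumed gain target based on short-term RMS energy.
            rms = np.sqrt(np.mean(block ** 2)) + 1e-12
            return min(1.0, 0.1 / rms)

    class FilterController:
        def coefficients(self, parameter):
            # Map the analysis parameter to time-domain filter taps
            # (a single-tap gain in this toy example).
            return np.array([parameter])

    class AudioProcessingModule:
        def __init__(self):
            self._controller = FilterController()
            self._taps = np.array([1.0])

        def update(self, parameter):
            self._taps = self._controller.coefficients(parameter)

        def process(self, block):
            # Time-domain filtering of the input block.
            return np.convolve(block, self._taps, mode="same")

Because the processing module only reads the most recently set coefficients, the analysis path can run in parallel and lag slightly without blocking the audio path.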
Abstract:
A method for executing cryptographically secure transactions using voice and natural language processing is provided. The method comprises executing on a processor the steps of: receiving an electronic communication in a computer terminal with a memory module, an authentication module, a parsing module, an analog-to-digital converter, a voice interface module and a ledger module, the electronic communication being a verbal request by a user initiating a cryptographically secure transaction for a commodity of exchange in the form of an audio frequency signal; transforming the audio frequency signal into a digital signal; authenticating the user using the authentication module; parsing the digital signal using the parsing module to identify an intent of the verbal request by the user; determining that the intent of the verbal request matches an intent of the computer terminal; and transmitting the commodity of exchange upon confirmation of the intent of the verbal request matching the intent of the computer terminal.
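The claimed step sequence could be sketched as follows; handle_verbal_request and the module attributes on terminal are hypothetical stand-ins for the claimed modules, not interfaces defined by the abstract.

    def handle_verbal_request(audio_signal, terminal):
        digital = terminal.converter.to_digital(audio_signal)  # A/D conversion
        user = terminal.authentication.authenticate(digital)
        if user is None:
            raise PermissionError("user authentication failed")
        intent = terminal.parsing.parse_intent(digital)        # identify intent
        if intent == terminal.intent:
            # Intents match: record and transmit the commodity of exchange.
            terminal.ledger.record(user, intent)
            terminal.transmit_commodity(intent)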
Abstract:
Methods and systems are provided for providing alternative query suggestions. For example, a spoken natural language expression may be received and converted to a textual query by a speech recognition component. The spoken natural language expression may include one or more words, terms, and/or phrases. A phonetically confusable segment of the textual query may be identified by a classifier component. The classifier component may determine at least one alternative query based on identifying at least the phonetically confusable segment of the textual query. The classifier may further determine whether to suggest the at least one alternative query based on whether the at least one alternative query is sensical and/or useful. When it is determined to suggest the at least one alternative query, the at least one alternative query may be provided to and displayed on a user interface display.
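A minimal sketch of that flow, assuming hypothetical helper names (to_text, identify_confusable_segment, alternatives_for, is_sensical, is_useful) that the abstract does not define:

    def suggest_alternatives(spoken_expression, recognizer, classifier, ui):
        query = recognizer.to_text(spoken_expression)      # speech -> text
        segment = classifier.identify_confusable_segment(query)
        if segment is None:
            return                                         # nothing confusable
        for alt in classifier.alternatives_for(query, segment):
            # Only surface alternatives judged both sensical and useful.
            if classifier.is_sensical(alt) and classifier.is_useful(alt):
                ui.display_suggestion(alt)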
Abstract:
Systems and processes for structured dictation using intelligent automated assistants are provided. In one example process, a speech input representing a user request can be received. In addition, metadata associated with the speech input can be received. A text string corresponding to the speech input can be determined. The process can determine whether to perform natural language processing on the text string and whether the metadata identifies one or more domains corresponding to the user request. In response to the determination that natural language processing is to be performed on the text string and that the metadata identifies one or more domains corresponding to the user request, natural language processing of the text string can be constrained to the one or more domains. A result can be obtained based on the one or more domains and the result can be outputted from the electronic device.
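The domain-constraining decision might look like the following sketch; transcribe, should_process, restrict_to and output are assumed names, not an actual assistant API.

    def process_dictation(speech_input, metadata, stt, nlp, device):
        text = stt.transcribe(speech_input)
        domains = metadata.get("domains", [])
        if nlp.should_process(text) and domains:
            # Constrain natural language processing to the domains
            # identified by the metadata, then output the result.
            result = nlp.process(text, restrict_to=domains)
            device.output(result)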
Abstract:
A method for executing cinematic direction and dynamic character control via natural language output is provided. The method includes generating a first set of instructions for animation of characters and a second set of instructions for animation of environments; extracting a first set of dialogue elements from a conversant input received in an affective objects module of the processing circuit; extracting a second set of dialogue elements from a natural language system output; analyzing the first and second sets of dialogue elements by an analysis module in the processing circuit to determine emotional content data used to generate an emotional content report; analyzing the first and second sets of dialogue elements by the analysis module in the processing circuit to determine duration data used to generate a duration report; and animating the characters and the environments based on the emotional content report and the duration report.
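Read as a pipeline, the method could be sketched as below; every name is illustrative, since the abstract names the modules but not their interfaces.

    def direct_scene(conversant_input, nl_output, affective, analysis, animator):
        char_instr = animator.character_instructions()    # first instruction set
        env_instr = animator.environment_instructions()   # second instruction set
        d1 = affective.extract_dialogue_elements(conversant_input)
        d2 = affective.extract_dialogue_elements(nl_output)
        emotion_report = analysis.emotional_content(d1, d2)
        duration_report = analysis.durations(d1, d2)
        animator.animate(char_instr, env_instr, emotion_report, duration_report)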
Abstract:
Systems, methods and apparatus for invoking actions at a second user device from a first user device. A method includes determining that a first user device has an associated second user device; accessing specification data that specifies a set of user device actions that the second user device is configured to perform; receiving command inputs for the first user device; for each command input, determining whether the command input resolves to one of the user device actions; for each command input not determined to resolve to any of the user device actions, causing the command input to be processed at the first user device; and for each command input determined to resolve to one of the user device actions, causing the first user device to display in a user interface a dialog by which a user may either accept or deny invoking the user device action at the second user device.
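The per-command routing could be sketched as follows, with matches, process, confirm and invoke as hypothetical interfaces introduced here for illustration:

    def route_command(command, first_device, second_device, ui):
        # second_device.actions stands in for the accessed specification data.
        action = next((a for a in second_device.actions if a.matches(command)), None)
        if action is None:
            first_device.process(command)   # no matching action: handle locally
        elif ui.confirm("Invoke '%s' at your second device?" % action.name):
            second_device.invoke(action)    # user accepted the displayed dialog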