-
Publication No.: US12159628B1
Publication Date: 2024-12-03
Application No.: US17547947
Application Date: 2021-12-10
Applicant: Amazon Technologies, Inc.
Inventor: Amitabh Saikia , Devesh Mohan Pandey , Tagyoung Chung , Shanchan Wu , Chien-Wei Lin , Govindarajan Sundaram Thattai , Aishwarya Naresh Reganti , Arindam Mandal , Prakash Krishnan , Raefer Christopher Gabriel , Meyyappan Sundaram
IPC: G10L15/183 , G10L13/027 , G10L15/08
Abstract: Techniques for facilitating natural language interactions with visual interactive content are described. During a build time, a system analyzes various websites and applications relating to a particular user goal to understand website and application navigation and information relating to the user goal. The learned information is used to store configuration data. During runtime, when a user requests performance of an action, the system engages in a dialog with the user to complete the user's goal. The system uses the stored configuration data to determine actions to be performed at a website or application to complete the user's goal, and determines system responses to present to the user to facilitate completion of the goal. Such system responses may request information from the user, may inform the user of information displayed at the website or application, etc.
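As a rough illustration of the runtime flow described above, the following Python sketch walks hypothetical stored configuration data for a single goal, prompting for missing information and reporting website actions; the config schema, goal name, field names, and prompts are assumptions for illustration, not taken from the patent.

```python
# Hypothetical sketch: walking stored configuration data to complete a user goal.
# The config schema, slot names, and prompts here are illustrative assumptions.

CONFIG = {
    "book_table": [  # steps learned at build time for a restaurant-booking site
        {"action": "fill", "field": "party_size", "prompt": "For how many people?"},
        {"action": "fill", "field": "time", "prompt": "What time would you like?"},
        {"action": "click", "field": "submit_button"},
    ]
}

def run_dialog(goal: str, known_slots: dict) -> list[str]:
    """Return the system responses needed to finish the goal."""
    responses = []
    for step in CONFIG[goal]:
        if step["action"] == "fill" and step["field"] not in known_slots:
            responses.append(step["prompt"])  # ask the user for missing info
        elif step["action"] == "click":
            responses.append(f"Submitting your request via {step['field']}.")
    return responses

print(run_dialog("book_table", {"party_size": 2}))
# ['What time would you like?', 'Submitting your request via submit_button.']
```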
-
Publication No.: US11922095B2
Publication Date: 2024-03-05
Application No.: US15876858
Application Date: 2018-01-22
Applicant: Amazon Technologies, Inc.
Inventor: James David Meyers , Shah Samir Pravinchandra , Yue Liu , Arlen Dean , Daniel Miller , Arindam Mandal
CPC classification number: G06F3/167 , G10L15/00 , G10L15/063 , G10L15/1815 , G10L15/22 , G10L15/222 , G10L15/26 , G10L15/32 , G10L2015/088 , G10L2015/223 , G10L2015/226
Abstract: A system may use multiple speech interface devices to interact with a user by speech. All or a portion of the speech interface devices may detect a user utterance and may initiate speech processing to determine a meaning or intent of the utterance. Within the speech processing, arbitration is employed to select one of the multiple speech interface devices to respond to the user utterance. Arbitration may be based in part on metadata that directly or indirectly indicates the proximity of the user to the devices, and the device that is deemed to be nearest the user may be selected to respond to the user utterance.
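A minimal sketch of the arbitration idea, assuming each device attaches proximity-related metadata to its detection; the field names (SNR and wakeword confidence) are hypothetical stand-ins for the metadata the abstract describes.

```python
# Hypothetical sketch of proximity-based arbitration: each device reports
# metadata with its detection, and the device deemed nearest the user
# (approximated here by SNR, then wakeword confidence) is selected to respond.

def arbitrate(detections: list[dict]) -> str:
    """Pick the device ID whose metadata suggests it is closest to the user."""
    best = max(detections, key=lambda d: (d["snr_db"], d["wakeword_confidence"]))
    return best["device_id"]

detections = [
    {"device_id": "kitchen", "snr_db": 14.2, "wakeword_confidence": 0.91},
    {"device_id": "living_room", "snr_db": 22.7, "wakeword_confidence": 0.88},
]
print(arbitrate(detections))  # living_room responds to the utterance
```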
-
Publication No.: US11908468B2
Publication Date: 2024-02-20
Application No.: US17112520
Application Date: 2020-12-04
Applicant: Amazon Technologies, Inc.
Inventor: Prakash Krishnan , Arindam Mandal , Siddhartha Reddy Jonnalagadda , Nikko Strom , Ariya Rastrow , Ying Shi , David Chi-Wai Tang , Nishtha Gupta , Aaron Challenner , Bonan Zheng , Angeliki Metallinou , Vincent Auvray , Minmin Shen
IPC: G10L25/78 , G10L15/22 , G10L15/24 , G10L15/08 , G10L15/06 , G06V40/20 , G06F3/16 , G10L13/08 , G10L15/20 , G06V40/10 , G06V10/40 , G10L15/02 , G06F18/24
CPC classification number: G10L15/22 , G06F3/167 , G06F18/24 , G06V10/40 , G06V40/10 , G06V40/20 , G10L13/08 , G10L15/02 , G10L15/063 , G10L15/08 , G10L15/20 , G10L15/222 , G10L15/24 , G10L2015/0635 , G10L2015/088 , G10L2015/223 , G10L2015/227
Abstract: A system that is capable of resolving anaphora using timing data received by a local device. A local device outputs audio representing a list of entries. The audio may represent synthesized speech of the list of entries. A user can interrupt the device to select an entry in the list, such as by saying “that one.” The local device can determine an offset time representing the time between when audio playback began and when the user interrupted. The local device sends the offset time and audio data representing the utterance to a speech processing system, which can then use the offset time and stored data to identify which entry on the list was most recently output by the local device when the user interrupted. The system can then resolve anaphora to match that entry and can perform additional processing based on the referred-to item.
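The offset-based lookup might work along these lines; the entries, per-entry durations, and helper name below are illustrative assumptions.

```python
import bisect

# Hypothetical sketch: map the interruption offset back to the list entry that
# was most recently output. Entry names and durations are illustrative assumptions.

entries = ["pepperoni pizza", "margherita pizza", "veggie pizza"]
durations_ms = [1800, 2100, 1900]  # stored TTS playback length per entry

# Cumulative start time of each entry relative to when playback began.
starts_ms = [0]
for d in durations_ms[:-1]:
    starts_ms.append(starts_ms[-1] + d)

def resolve_that_one(offset_ms: int) -> str:
    """Return the entry playing when the user said 'that one'."""
    idx = bisect.bisect_right(starts_ms, offset_ms) - 1
    return entries[idx]

print(resolve_that_one(2500))  # margherita pizza (entry 2 began at 1800 ms)
```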
-
Publication No.: US11475881B2
Publication Date: 2022-10-18
Application No.: US16932049
Application Date: 2020-07-17
Applicant: Amazon Technologies, Inc.
Inventor: Arindam Mandal , Kenichi Kumatani , Nikko Strom , Minhua Wu , Shiva Sundaram , Bjorn Hoffmeister , Jeremie Lecomte
Abstract: Techniques for speech processing using a deep neural network (DNN) based acoustic model front-end are described. A new modeling approach directly models multi-channel audio data received from a microphone array using a first model (e.g., multi-channel DNN) that takes in raw signals and produces a first feature vector that may be used similarly to beamformed features generated by an acoustic beamformer. A second model (e.g., feature extraction DNN) processes the first feature vector and transforms it to a second feature vector having a lower dimensional representation. A third model (e.g., classification DNN) processes the second feature vector to perform acoustic unit classification and generate text data. These three models may be jointly optimized for speech processing (as opposed to individually optimized for signal enhancement), enabling improved performance despite a reduction in microphones and a reduction in bandwidth consumption during real-time processing.
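A toy PyTorch sketch of the three-stage, jointly optimized pipeline; the layer types and sizes are assumptions chosen for illustration, not the patent's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the three jointly optimized stages described above.
# The 1-D conv front-end and all dimensions are illustrative assumptions.

class MultiChannelFrontEnd(nn.Module):
    def __init__(self, n_mics=4, n_units=64):
        super().__init__()
        # Stage 1: multi-channel DNN over raw signals (beamformer-like features).
        self.spatial = nn.Conv1d(n_mics, 128, kernel_size=400, stride=160)
        # Stage 2: feature-extraction DNN to a lower-dimensional representation.
        self.feature = nn.Sequential(nn.Linear(128, 40), nn.ReLU())
        # Stage 3: classification DNN over acoustic units.
        self.classify = nn.Linear(40, n_units)

    def forward(self, raw):                # raw: (batch, mics, samples)
        x = torch.relu(self.spatial(raw))  # (batch, 128, frames)
        x = x.transpose(1, 2)              # (batch, frames, 128)
        x = self.feature(x)                # (batch, frames, 40)
        return self.classify(x)            # per-frame acoustic-unit logits

model = MultiChannelFrontEnd()
logits = model(torch.randn(2, 4, 16000))    # one second of 4-mic, 16 kHz audio
print(logits.shape)                         # torch.Size([2, 98, 64])
opt = torch.optim.Adam(model.parameters())  # one optimizer spans all three stages
```

Putting all three stages under a single optimizer mirrors the joint optimization the abstract contrasts with individually optimized signal-enhancement components.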
-
Publication No.: US20220093093A1
Publication Date: 2022-03-24
Application No.: US17112227
Application Date: 2020-12-04
Applicant: Amazon Technologies, Inc.
Inventor: Prakash Krishnan , Arindam Mandal , Nikko Strom , Pradeep Natarajan , Ariya Rastrow , Shiv Naga Prasad Vitaladevuni , David Chi-Wai Tang , Aaron Challenner , Xu Zhang , Krishna Anisetty , Josey Diego Sandoval , Rohit Prasad , Premkumar Natarajan
Abstract: A system can operate a speech-controlled device in a mode where the speech-controlled device determines that an utterance is directed at the speech-controlled device using image data showing the user speaking the utterance. If the user is directing the user's gaze at the speech-controlled device while speaking, the system may determine the utterance is system directed and thus may perform further speech processing based on the utterance. If the user's gaze is directed elsewhere, the system may determine the utterance is not system directed (for example, directed at another user) and thus the system may not perform further speech processing based on the utterance and may take other actions, for example discarding audio data of the utterance.
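A minimal sketch of the gaze-gated decision, assuming an upstream computer-vision component supplies a gaze score; the threshold, score values, and function names are hypothetical.

```python
# Hypothetical sketch of the device-directed decision: a gaze score derived
# from image data gates whether captured audio proceeds to speech processing.

GAZE_THRESHOLD = 0.8  # illustrative assumption, not a value from the patent

def handle_utterance(audio_data: bytes, gaze_score: float) -> str:
    """Continue speech processing only if gaze suggests a system-directed utterance."""
    if gaze_score >= GAZE_THRESHOLD:
        return "system_directed: forwarding audio for speech processing"
    return "not_system_directed: discarding audio data"

print(handle_utterance(b"\x00\x01", gaze_score=0.93))  # user looking at device
print(handle_utterance(b"\x00\x01", gaze_score=0.35))  # user looking elsewhere
```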
-
Publication No.: US20210027798A1
Publication Date: 2021-01-28
Application No.: US17022197
Application Date: 2020-09-16
Applicant: Amazon Technologies, Inc.
Inventor: Shiva Kumar Sundaram , Chao Wang , Shiv Naga Prasad Vitaladevuni , Spyridon Matsoukas , Arindam Mandal
Abstract: A speech-capture device can capture audio data during wakeword monitoring and use the audio data to determine if a user is present near the device, even if no wakeword is spoken. Audio such as speech, human-originating sounds (e.g., coughing, sneezing), or other human-related noises (e.g., footsteps, doors closing) can be used to detect a user's presence. Audio frames are individually scored as to whether a human presence is detected in the particular audio frames. The scores are then smoothed relative to nearby frames to create a decision for a particular frame. Presence information can then be sent according to a periodic schedule to a remote device to create a presence “heartbeat” that regularly identifies whether a user is detected proximate to a speech-capture device.
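The per-frame scoring and smoothing step might look like this sketch, where the window size, decision threshold, and score values are illustrative assumptions.

```python
# Hypothetical sketch of per-frame presence scoring with smoothing relative
# to nearby frames. Window size, threshold, and scores are assumptions.

def smooth(scores: list[float], window: int = 5) -> list[float]:
    """Average each frame's score with its neighbors before deciding."""
    half = window // 2
    out = []
    for i in range(len(scores)):
        lo, hi = max(0, i - half), min(len(scores), i + half + 1)
        out.append(sum(scores[lo:hi]) / (hi - lo))
    return out

frame_scores = [0.1, 0.2, 0.9, 0.8, 0.95, 0.2, 0.1]  # raw per-frame scores
decisions = [s > 0.5 for s in smooth(frame_scores)]  # presence per frame
print(decisions)  # isolated spikes and dips are evened out by the smoothing
```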
-
Publication No.: US10796716B1
Publication Date: 2020-10-06
Application No.: US16157319
Application Date: 2018-10-11
Applicant: Amazon Technologies, Inc.
Inventor: Shiva Kumar Sundaram , Chao Wang , Shiv Naga Prasad Vitaladevuni , Spyridon Matsoukas , Arindam Mandal
Abstract: A speech-capture device can capture audio data during wakeword monitoring and use the audio data to determine if a user is present near the device, even if no wakeword is spoken. Audio such as speech, human-originating sounds (e.g., coughing, sneezing), or other human-related noises (e.g., footsteps, doors closing) can be used to detect a user's presence. Audio frames are individually scored as to whether a human presence is detected in the particular audio frames. The scores are then smoothed relative to nearby frames to create a decision for a particular frame. Presence information can then be sent according to a periodic schedule to a remote device to create a presence “heartbeat” that regularly identifies whether a user is detected proximate to a speech-capture device.
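Complementing the frame-level sketch shown for the related publication above, this sketch illustrates the periodic presence "heartbeat"; the interval, sample data, and message fields are assumptions.

```python
# Hypothetical sketch of the presence "heartbeat": on a fixed schedule the
# device reports whether a user is detected, whether or not the state changed.

HEARTBEAT_INTERVAL_S = 30.0  # illustrative reporting interval

def heartbeat_messages(presence_samples):
    """Yield one report per elapsed interval from (timestamp, detected) samples."""
    next_beat = 0.0
    for ts, detected in presence_samples:
        while ts >= next_beat:
            yield {"time_s": next_beat, "user_present": detected}
            next_beat += HEARTBEAT_INTERVAL_S

samples = [(0.0, False), (31.0, True), (65.0, True), (100.0, False)]
for msg in heartbeat_messages(samples):
    print(msg)  # one message at t=0, 30, 60, 90 regardless of state changes
```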
-
Publication No.: US10304444B2
Publication Date: 2019-05-28
Application No.: US15196540
Application Date: 2016-06-29
Applicant: Amazon Technologies, Inc.
Inventor: Lambert Mathias , Thomas Kollar , Arindam Mandal , Angeliki Metallinou
IPC: G06F17/20 , G10L15/22 , G10L15/26 , G10L15/02 , G10L15/18 , G10L15/14 , G06F16/35 , G06F16/332 , G06F17/27
Abstract: A system capable of performing natural language understanding (NLU) without the concept of a domain that influences NLU results. The present system uses hierarchical organizations of intents/commands and entity types, and trained models associated with those hierarchies, so that commands and entity types may be determined for incoming text queries without necessarily determining a domain for the incoming text. The system thus operates in a domain-agnostic manner, in a departure from multi-domain architecture NLU processing where a system determines NLU results for multiple domains simultaneously and then ranks them to determine which to select as the result.
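A toy sketch of descending an intent hierarchy without ever choosing a domain; the hierarchy, keyword scorer, and intent names are illustrative assumptions standing in for the trained models the abstract describes.

```python
# Hypothetical sketch of domain-agnostic NLU: the query descends an intent
# hierarchy greedily, scoring children at each level, with no domain step.

HIERARCHY = {
    "play": {"play_music": {}, "play_video": {}},
    "get_info": {"get_weather": {}, "get_news": {}},
}

KEYWORDS = {  # toy stand-in for trained per-node models
    "play": {"play", "start"}, "play_music": {"song", "music"},
    "play_video": {"movie", "video"}, "get_info": {"what", "tell"},
    "get_weather": {"weather", "rain"}, "get_news": {"news", "headlines"},
}

def score(intent: str, tokens: set) -> int:
    return len(KEYWORDS[intent] & tokens)

def resolve(query: str) -> str:
    tokens, node, intent = set(query.lower().split()), HIERARCHY, None
    while node:  # greedy descent; no domain is ever determined
        intent = max(node, key=lambda child: score(child, tokens))
        node = node[intent]
    return intent

print(resolve("play that song"))       # play_music
print(resolve("what is the weather"))  # get_weather
```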
-
Publication No.: US10304440B1
Publication Date: 2019-05-28
Application No.: US15198578
Application Date: 2016-06-30
Applicant: Amazon Technologies, Inc.
Inventor: Sankaran Panchapagesan , Bjorn Hoffmeister , Arindam Mandal , Aparna Khare , Shiv Naga Prasad Vitaladevuni , Spyridon Matsoukas , Ming Sun
Abstract: An approach to keyword spotting makes use of acoustic parameters that are trained on a keyword spotting task as well as on a second speech recognition task, for example, a large vocabulary continuous speech recognition task. The parameters may be optimized according to a weighted measure that weighs the keyword spotting task more highly than the other task, and that weighs utterances of a keyword more highly than utterances of other speech. In some applications, a keyword spotter configured with the acoustic parameters is used for trigger or wake word detection.
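The weighted training measure might combine the two tasks along these lines; all weight values and function names below are illustrative assumptions.

```python
# Hypothetical sketch of the weighted measure: the keyword-spotting task is
# weighted above the LVCSR task, and keyword utterances above other speech.

TASK_WEIGHT_KWS = 0.8      # keyword spotting weighted more highly
TASK_WEIGHT_LVCSR = 0.2    # second (large-vocabulary) recognition task
KEYWORD_UTT_WEIGHT = 3.0   # keyword utterances weighted above other speech

def weighted_loss(kws_losses, lvcsr_loss, is_keyword_utt):
    """Combine per-utterance KWS losses with the LVCSR task loss."""
    kws = sum(
        (KEYWORD_UTT_WEIGHT if kw else 1.0) * loss
        for loss, kw in zip(kws_losses, is_keyword_utt)
    ) / len(kws_losses)
    return TASK_WEIGHT_KWS * kws + TASK_WEIGHT_LVCSR * lvcsr_loss

print(weighted_loss([0.4, 0.1], lvcsr_loss=0.6, is_keyword_utt=[True, False]))
# 0.64: the keyword utterance's loss dominates the combined objective
```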
-
Publication No.: US11893999B1
Publication Date: 2024-02-06
Application No.: US16055755
Application Date: 2018-08-06
Applicant: Amazon Technologies, Inc.
Inventor: Sai Sailesh Kopuri , John Moore , Sundararajan Srinivasan , Aparna Khare , Arindam Mandal , Spyridon Matsoukas , Rohit Prasad
Abstract: Techniques for enrolling a user in a system's user recognition functionality without requiring the user to speak particular speech are described. The system may determine characteristics unique to a user input. The system may generate an implicit voice profile from user inputs having similar characteristics. After an implicit voice profile is generated, the system may receive a user input having speech characteristics similar to those of the implicit voice profile. The system may ask the user if the user wants the system to associate the implicit voice profile with a particular user identifier. If the user responds affirmatively, the system may request an identifier of a user profile (e.g., a user name). In response to receiving the user's name, the system may identify a user profile associated with the name and associate the implicit voice profile with the user profile, thereby converting the implicit voice profile into an explicit voice profile.
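A rough sketch of accumulating an implicit voice profile from similar utterance embeddings; the similarity threshold, minimum utterance count, embedding values, and class shape are all assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of implicit enrollment: utterances with similar
# characteristics (here, cosine-similar embeddings) accumulate into an
# implicit profile that can later be linked to a user profile.

SIMILARITY_THRESHOLD = 0.85  # illustrative assumption
MIN_UTTERANCES = 3           # illustrative assumption

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class ImplicitProfile:
    def __init__(self, first_embedding):
        self.embeddings = [first_embedding]

    def centroid(self):
        return np.mean(self.embeddings, axis=0)

    def maybe_add(self, embedding) -> bool:
        """Absorb the input if its characteristics match the profile."""
        if cosine(self.centroid(), embedding) >= SIMILARITY_THRESHOLD:
            self.embeddings.append(embedding)
            return True
        return False

    def ready_for_association(self) -> bool:
        # Enough similar inputs: ask the user whether to link a user
        # identifier, converting the implicit profile into an explicit one.
        return len(self.embeddings) >= MIN_UTTERANCES

profile = ImplicitProfile(np.array([0.9, 0.1, 0.0]))
profile.maybe_add(np.array([0.88, 0.12, 0.01]))
profile.maybe_add(np.array([0.91, 0.09, 0.02]))
print(profile.ready_for_association())  # True -> prompt to link a user profile
```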