-
Publication No.: WO2020068858A9
Publication Date: 2020-04-02
Application No.: PCT/US2019/052736
Filing Date: 2019-09-24
Applicant: AMAZON TECHNOLOGIES, INC., BRESLIN, Catherine
Inventor: BRESLIN, Catherine, FEINSTEIN, Jonathan B., VERMA, Alok, SHABBEER, Amina, DURHAM, Brandon Scott, BUECHE, Edward, MOERCHEN, Fabian, TRIEFENBACH, Fabian, REITER, Klaus, LATIN-STOERMER, Toby R., KARANASOU, Panagiota, GASPERS, Judith
IPC: G10L15/183, G10L13/08, G10L15/00
Abstract: Techniques are provided for training a language recognition model. For example, a language recognition model may be maintained and associated with a reference language (e.g., English). The language recognition model may be configured to accept as input an utterance in the reference language and to identify a feature to be executed in response to receiving the utterance. New language data (e.g., other utterances) provided in a different language (e.g., German) may be obtained. This new language data may be translated to English and utilized to retrain the model to recognize reference language data as well as language data translated to the reference language. Subsequent utterances (e.g., English utterances, or German utterances translated to English) may be provided to the updated model and a feature may be identified. One or more instructions may be sent to a user device to execute a set of instructions associated with the feature.
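The abstract describes a translate-then-retrain flow. The Python sketch below is only an illustration under stated assumptions: the phrase table, the FeatureRecognitionModel class, the feature labels, and the bag-of-words matching are hypothetical stand-ins invented for the example, not the model or translation system the patent actually uses.

```python
# Minimal sketch (hypothetical names) of the translate-then-retrain flow:
# an English (reference-language) feature model is retrained with German
# utterances that are first translated to English, and later queries are
# translated to English before inference.

from collections import Counter

# Stand-in for a machine-translation step; a toy phrase table keeps the
# example self-contained.
PHRASE_TABLE = {"spiele musik": "play music", "wecker stellen": "set alarm"}

def translate_to_reference(utterance: str, source_lang: str) -> str:
    if source_lang == "en":
        return utterance
    return PHRASE_TABLE.get(utterance.lower(), utterance)

class FeatureRecognitionModel:
    """Toy bag-of-words classifier mapping an utterance to a feature label."""

    def __init__(self):
        self.examples = []  # list of (word counts, feature label)

    def train(self, data: list[tuple[str, str]]) -> None:
        for utterance, feature in data:
            self.examples.append((Counter(utterance.lower().split()), feature))

    def predict(self, utterance: str) -> str:
        words = Counter(utterance.lower().split())
        # Pick the training example with the largest word overlap.
        best = max(self.examples, key=lambda ex: sum((ex[0] & words).values()))
        return best[1]

# 1. Maintain a model trained on reference-language (English) data.
model = FeatureRecognitionModel()
model.train([("play music", "MUSIC_PLAYBACK"), ("set alarm", "ALARM")])

# 2. Obtain new German utterances, translate them to English, and retrain.
new_german_data = [("spiele musik", "MUSIC_PLAYBACK"), ("wecker stellen", "ALARM")]
translated = [(translate_to_reference(u, "de"), f) for u, f in new_german_data]
model.train(translated)

# 3. At runtime, translate the incoming utterance and identify the feature;
#    instructions for that feature would then be sent to the user device.
query = translate_to_reference("spiele musik", "de")
print(model.predict(query))  # -> MUSIC_PLAYBACK
```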
-
Publication No.: WO2020068858A1
Publication Date: 2020-04-02
Application No.: PCT/US2019/052736
Filing Date: 2019-09-24
Applicant: AMAZON TECHNOLOGIES, INC., BRESLIN, Catherine
Inventor: BRESLIN, Catherine, FEINSTEIN, Jonathan B., VERMA, Alok, SHABBEER, Amina, DURHAM, Brandon Scott, BUECHE, Edward, MOERCHEN, Fabian, TRIEFENBACH, Fabian, REITER, Klaus, LATIN-STOERMER, Toby R., KARANASOU, Panagiota, GASPERS, Judith
IPC: G10L15/183, G10L13/08, G10L15/00
Abstract: Techniques are provided for training a language recognition model. For example, a language recognition model may be maintained and associated with a reference language (e.g., English). The language recognition model may be configured to accept as input an utterance in the reference language and to identify a feature to be executed in response to receiving the utterance. New language data (e.g., other utterances) provided in a different language (e.g., German) may be obtained. This new language data may be translated to English and utilized to retrain the model to recognize reference language data as well as language data translated to the reference language. Subsequent utterances (e.g., English utterances, or German utterances translated to English) may be provided to the updated model and a feature may be identified. One or more instructions may be sent to a user device to execute a set of instructions associated with the feature.
-
Publication No.: WO2022271570A1
Publication Date: 2022-12-29
Application No.: PCT/US2022/034084
Filing Date: 2022-06-17
Applicant: AMAZON TECHNOLOGIES, INC.
Inventor: KARLAPATI, Sri Vishnu Kumar, KARANASOU, Panagiota, JOLY, Arnaud Vincent Pierre Yves, MOINET, Alexis Pierre, DRUGMAN, Thomas Renaud, MAKAROV, Petr, BOLLEPALLI, Bajibabu, ABBAS, Syed Ammar, SLANGEN, Simon
IPC: G10L13/08, G10L15/16, G10L15/183
Abstract: Techniques for utilizing memory for a neural network are described. For example, some techniques respond to a query from a neural network using a plurality of memory types: a short-term memory that stores fine-grained information for recent text of a document and returns a first value in response, an episodic long-term memory that stores information discarded from the short-term memory in a compressed form and returns a second value in response, and a semantic long-term memory that stores relevant facts per entity in the document.
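The three memory types in this abstract map naturally onto three small stores that a retrieval step can query. The sketch below is only an illustration under assumptions: the class names, the word-overlap queries, and the truncation-based "compression" are invented for the example and are not taken from the patent.

```python
# Minimal sketch (hypothetical names) of the three memory types: a short-term
# memory over recent text, an episodic long-term memory holding compressed
# items evicted from the short-term store, and a semantic long-term memory of
# per-entity facts. A neural network would query these and combine the
# returned values; here we simply print them.

from collections import deque

class ShortTermMemory:
    """Keeps fine-grained recent text; evicts the oldest items when full."""
    def __init__(self, capacity: int = 4):
        self.items = deque()
        self.capacity = capacity

    def add(self, sentence: str) -> list[str]:
        evicted = []
        self.items.append(sentence)
        while len(self.items) > self.capacity:
            evicted.append(self.items.popleft())
        return evicted

    def query(self, term: str) -> list[str]:
        return [s for s in self.items if term.lower() in s.lower()]

class EpisodicLongTermMemory:
    """Stores a compressed form of text discarded from short-term memory."""
    def __init__(self):
        self.summaries = []

    def add_compressed(self, sentence: str) -> None:
        # Toy "compression": keep only the first few words.
        self.summaries.append(" ".join(sentence.split()[:5]) + " ...")

    def query(self, term: str) -> list[str]:
        return [s for s in self.summaries if term.lower() in s.lower()]

class SemanticLongTermMemory:
    """Stores relevant facts keyed by entity mentioned in the document."""
    def __init__(self):
        self.facts = {}

    def add_fact(self, entity: str, fact: str) -> None:
        self.facts.setdefault(entity, []).append(fact)

    def query(self, entity: str) -> list[str]:
        return self.facts.get(entity, [])

stm, eltm, sltm = ShortTermMemory(), EpisodicLongTermMemory(), SemanticLongTermMemory()
sltm.add_fact("Ada", "Ada wrote the first published algorithm.")
for sentence in ["Ada studied mathematics.", "She met Babbage.",
                 "Ada annotated the Analytical Engine paper.",
                 "The notes were published in 1843.", "Ada is remembered today."]:
    for old in stm.add(sentence):      # overflow moves to episodic memory
        eltm.add_compressed(old)

print(stm.query("Ada"))   # first value: fine-grained matches from recent text
print(eltm.query("Ada"))  # second value: compressed matches from older text
print(sltm.query("Ada"))  # per-entity facts from semantic memory
```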
-