-
Publication Number: US20240232637A9
Publication Date: 2024-07-11
Application Number: US18491877
Filing Date: 2023-10-23
Applicant: Google LLC
Inventor: Krishna Pragash Srinivasan , Michael Bendersky , Anupam Samanta , Lingrui Liao , Luca Bertelli , Ming-Wei Chang , Iftekhar Naim , Siddhartha Brahma , Siamak Shakeri , Hongkun Yu , John Nham , Karthik Raman , Raphael Dominik Hoffmann
IPC: G06N3/0895 , G06F16/903 , G06F16/93 , G06N3/0455
CPC classification number: G06N3/0895 , G06F16/90335 , G06F16/93 , G06N3/0455
Abstract: Provided are computing systems, methods, and platforms that train query processing models, such as large language models, to perform query intent classification tasks by using retrieval augmentation and multi-stage distillation. Unlabeled training examples of queries may be obtained, and a set of the training examples may be augmented with additional feature annotations to generate augmented training examples. A first query processing model may annotate the retrieval augmented queries to generate inferred labels for the augmented training examples. A second query processing model may be trained on the inferred labels, distilling the query processing model that was trained with retrieval augmentation into a non-retrieval augmented query processing model. The second query processing model may annotate the entire set of unlabeled training examples. Another stage of distillation may train a third query processing model using the entire set of unlabeled training examples without retrieval augmentation.
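To make the data flow in this abstract concrete, the sketch below walks through the three models and the two distillation stages in Python. The retrieve_context() helper, the fit/predict interface on the models, and the subset size are assumptions made for illustration only; the patent does not prescribe this code.

```python
# Minimal sketch of the multi-stage distillation described in the abstract.
# All helper names, interfaces, and sizes are hypothetical placeholders.

def retrieve_context(query: str) -> str:
    """Hypothetical retrieval augmentation: attach extra feature annotations."""
    return f"{query} [retrieved annotations]"

def distill(teacher, student, final_model, unlabeled_queries, subset_size=1000):
    # Stage 1: the first (retrieval-augmented) model labels an augmented subset.
    subset = unlabeled_queries[:subset_size]
    augmented = [retrieve_context(q) for q in subset]
    inferred = [teacher.predict(q) for q in augmented]

    # Stage 2: distill into a non-retrieval-augmented model trained on raw queries.
    student.fit(subset, inferred)

    # The second model then labels the entire unlabeled set (no retrieval).
    full_labels = [student.predict(q) for q in unlabeled_queries]

    # Stage 3: train a third model on the full set, again without retrieval.
    final_model.fit(unlabeled_queries, full_labels)
    return final_model
```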
-
Publication Number: US20240135187A1
Publication Date: 2024-04-25
Application Number: US18491877
Filing Date: 2023-10-22
Applicant: Google LLC
Inventor: Krishna Pragash Srinivasan , Michael Bendersky , Anupam Samanta , Lingrui Liao , Luca Bertelli , Ming-Wei Chang , Iftekhar Naim , Siddhartha Brahma , Siamak Shakeri , Hongkun Yu , John Nham , Karthik Raman , Raphael Dominik Hoffmann
IPC: G06N3/0895 , G06F16/903 , G06F16/93 , G06N3/0455
CPC classification number: G06N3/0895 , G06F16/90335 , G06F16/93 , G06N3/0455
Abstract: Provided are computing systems, methods, and platforms that train query processing models, such as large language models, to perform query intent classification tasks by using retrieval augmentation and multi-stage distillation. Unlabeled training examples of queries may be obtained, and a set of the training examples may be augmented with additional feature annotations to generate augmented training examples. A first query processing model may annotate the retrieval augmented queries to generate inferred labels for the augmented training examples. A second query processing model may be trained on the inferred labels, distilling the query processing model that was trained with retrieval augmentation into a non-retrieval augmented query processing model. The second query processing model may annotate the entire set of unlabeled training examples. Another stage of distillation may train a third query processing model using the entire set of unlabeled training examples without retrieval augmentation.
-
Publication Number: US20240070456A1
Publication Date: 2024-02-29
Application Number: US18240954
Filing Date: 2023-08-31
Applicant: Google LLC
Inventor: Karthik Raman , Kazuma Hashimoto
IPC: G06N3/08
CPC classification number: G06N3/08
Abstract: Provided are systems and methods for corrective reward optimization for generative sequential labeling. In particular, example aspects of the present disclosure are directed to an effective framework for generative reward optimization of text (or other) data sequences, certain example implementations of which can be referred to as “GROOT”. Example implementations of the proposed framework work by training a generative sequential labeling model to match the decoder output distribution with that of the (possibly black-box) reward function. Using an iterative training regime, the framework can first generate prediction candidates and then correct errors in the candidates. Finally, a loss function can be used that contrasts those candidates based on their reward values (e.g., as measured by a reward function that encodes the specific objectives for a particular setting or application).
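The abstract's final step contrasts sampled candidates by their reward values. One possible reading of that step is a hinge-style pairwise loss like the PyTorch sketch below; this is a generic illustration, not the actual GROOT loss, and the margin, tensor shapes, and function name are assumptions.

```python
import torch

def reward_contrastive_loss(candidate_logprobs, rewards, margin=1.0):
    """Hinge-style pairwise loss: each candidate with a higher reward should get a
    higher model log-probability than any lower-reward candidate, by at least `margin`."""
    diff = candidate_logprobs.unsqueeze(1) - candidate_logprobs.unsqueeze(0)  # diff[i, j] = logp_i - logp_j
    better = rewards.unsqueeze(1) > rewards.unsqueeze(0)                      # better[i, j]: reward_i > reward_j
    penalties = torch.relu(margin - diff)[better]
    # Keep the graph connected even if no pair is strictly ordered by reward.
    return penalties.mean() if penalties.numel() > 0 else candidate_logprobs.sum() * 0.0

# Toy usage: three sampled candidates with model log-probabilities and rewards
# from a (possibly black-box) reward function.
logprobs = torch.tensor([-2.3, -1.1, -4.0], requires_grad=True)
rewards = torch.tensor([0.9, 0.4, 0.1])
reward_contrastive_loss(logprobs, rewards).backward()
```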
-
Publication Number: US20210374345A1
Publication Date: 2021-12-02
Application Number: US17336093
Filing Date: 2021-06-01
Applicant: Google LLC
Inventor: Karthik Raman , Liu Yang , Mike Bendersky , Jiecao Chen , Marc Alexander Najork
IPC: G06F40/284 , G06N3/08 , G06N3/04
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a tuple of respective input sequences to generate an output. In one aspect, one of the systems includes a neural network comprising a plurality of encoder neural networks and a head neural network, each encoder neural network configured to: receive a respective input sequence from the tuple; process the respective input sequence using one or more encoder network layers to generate an encoded representation comprising a sequence of tokens; and process each of some or all of the tokens in the sequence of tokens using a projection layer to generate a lower-dimensional representation, and the head neural network configured to: receive lower-dimensional representations of a respective proper subset of the sequence of tokens generated by the encoder neural network; and process the lower-dimensional representations to generate the output.
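As a rough structural sketch of the architecture this abstract describes (per-sequence encoders, a projection layer producing lower-dimensional token representations, and a head that consumes only a proper subset of those tokens), the PyTorch module below may help. All dimensions, the choice of keeping the first few token positions, and the scoring head are illustrative assumptions, not the claimed design.

```python
import torch
from torch import nn

class ProjectedEncoder(nn.Module):
    """One encoder tower: embedding, transformer layers, then a down-projection."""
    def __init__(self, vocab_size=30522, hidden=256, proj_dim=64, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.project = nn.Linear(hidden, proj_dim)  # lower-dimensional token representations

    def forward(self, token_ids):
        encoded = self.encoder(self.embed(token_ids))  # (batch, seq, hidden)
        return self.project(encoded)                   # (batch, seq, proj_dim)

class MultiEncoderScorer(nn.Module):
    """Scores a tuple of input sequences with per-sequence encoders and a shared head.
    Only a proper subset of each token sequence (here, the first `keep` positions,
    assuming every sequence has at least `keep` tokens) is passed to the head."""
    def __init__(self, num_sequences=2, proj_dim=64, keep=8):
        super().__init__()
        self.encoders = nn.ModuleList(
            [ProjectedEncoder(proj_dim=proj_dim) for _ in range(num_sequences)])
        self.keep = keep
        self.head = nn.Linear(num_sequences * keep * proj_dim, 1)

    def forward(self, sequences):
        kept = [enc(ids)[:, :self.keep, :] for enc, ids in zip(self.encoders, sequences)]
        flat = torch.cat([k.flatten(start_dim=1) for k in kept], dim=-1)
        return self.head(flat)
```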
-
Publication Number: US12182509B2
Publication Date: 2024-12-31
Application Number: US17336093
Filing Date: 2021-06-01
Applicant: Google LLC
Inventor: Karthik Raman , Liu Yang , Mike Bendersky , Jiecao Chen , Marc Alexander Najork
IPC: G06F40/284 , G06N3/04 , G06N3/045 , G06N3/08
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a tuple of respective input sequences to generate an output. In one aspect, one of the systems includes a neural network comprising a plurality of encoder neural networks and a head neural network, each encoder neural network configured to: receive a respective input sequence from the tuple; process the respective input sequence using one or more encoder network layers to generate an encoded representation comprising a sequence of tokens; and process each of some or all of the tokens in the sequence of tokens using a projection layer to generate a lower-dimensional representation, and the head neural network configured to: receive lower-dimensional representations of a respective proper subset of the sequence of tokens generated by the encoder neural network; and process the lower-dimensional representations to generate the output.
-
Publication Number: US11475290B2
Publication Date: 2022-10-18
Application Number: US15394875
Filing Date: 2016-12-30
Applicant: Google LLC
Inventor: Jeffrey Jon Dalton , Karthik Raman , Tobias Schnabel , Evgeniy Gabrilovich
IPC: G06N3/08 , G06N20/00 , G06F16/9535 , G06F16/951 , G06F16/248
Abstract: The present disclosure provides systems and methods that use machine learning to improve whole-structure relevance of hierarchical informational displays. In particular, the present disclosure provides systems and methods that employ a supervised, discriminative machine learning approach to jointly optimize the ranking of items and their display attributes. One example system includes a machine-learned display selection model that has been trained to jointly select a plurality of items and one or more attributes for each item for inclusion in an informational display. For example, the machine-learned display selection model can optimize a nested submodular objective function to jointly select the items and attributes.
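To illustrate the flavor of jointly selecting items and their display attributes under a submodular (coverage-style) objective, here is a small, self-contained greedy sketch in Python. The topic-coverage objective, the data layout, and the budgets are invented for illustration; the patent's learned display selection model and its nested objective are not reproduced here.

```python
# Greedy maximization of an invented coverage objective over items and attributes.

def marginal_gain(covered, candidate_topics):
    """Gain = number of not-yet-covered topics the candidate would add."""
    return len(set(candidate_topics) - covered)

def greedy_select(items, max_items=3, max_attrs_per_item=2):
    """items: dict mapping item -> dict of attribute -> list of covered topics."""
    covered, display = set(), []
    for _ in range(max_items):
        best = None
        for item, attrs in items.items():
            if any(item == chosen for chosen, _ in display):
                continue
            # Rank this item's attributes by individual marginal gain, keep the top few.
            ranked = sorted(attrs.items(),
                            key=lambda kv: marginal_gain(covered, kv[1]),
                            reverse=True)[:max_attrs_per_item]
            joint_topics = [t for _, topics in ranked for t in topics]
            gain = marginal_gain(covered, joint_topics)
            if best is None or gain > best[0]:
                best = (gain, item, [a for a, _ in ranked])
        if best is None or best[0] == 0:
            break
        _, item, attr_names = best
        display.append((item, attr_names))
        for a in attr_names:
            covered |= set(items[item][a])
    return display

# Toy usage with an invented catalog of items, attributes, and covered topics.
catalog = {
    "Hotel A": {"price": ["cost"], "pool": ["amenities"], "wifi": ["amenities"]},
    "Hotel B": {"price": ["cost"], "view": ["scenery"]},
}
print(greedy_select(catalog, max_items=2))
```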
-