Universal Self-Adaptive Prompting
    Invention Application

    Publication No.: US20240394545A1

    Publication Date: 2024-11-28

    Application No.: US18377368

    Filing Date: 2023-10-06

    Applicant: Google LLC

    Abstract: Aspects of the disclosure are directed to methods, systems, and computer readable media for universal self-adaptive prompting (USP), which includes an automatic prompt design approach specifically tailored for zero-shot learning, though still compatible with few-shot learning. To achieve universal prompting, USP categorizes a natural language processing (NLP) task into one of a plurality of possible task types and then uses a corresponding selector to select the most suitable queries and zero-shot model-generated responses as pseudo-demonstrations, thereby generalizing in-context learning to the zero-shot setup in a fully automated manner.
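
    The selection step at the heart of USP can be illustrated with a small sketch. Everything below (the `confidence` proxy, the field names, the two-demo budget) is an illustrative assumption rather than the patented implementation: zero-shot model answers are ranked by average token log-probability, and the top-scoring (query, answer) pairs are prepended to the test query as pseudo-demonstrations.

```python
def confidence(logprobs):
    """Average token log-probability as a simple confidence proxy."""
    return sum(logprobs) / len(logprobs)

def select_pseudo_demos(candidates, k=2):
    """Pick the k highest-confidence (query, zero-shot answer) pairs."""
    ranked = sorted(candidates, key=lambda c: confidence(c["logprobs"]), reverse=True)
    return ranked[:k]

def build_prompt(demos, query):
    """Prepend the selected pseudo-demonstrations to the test query."""
    parts = [f"Q: {d['query']}\nA: {d['answer']}" for d in demos]
    parts.append(f"Q: {query}\nA:")
    return "\n\n".join(parts)

# Toy zero-shot outputs with made-up log-probabilities.
candidates = [
    {"query": "2+2?", "answer": "4", "logprobs": [-0.1, -0.2]},
    {"query": "Capital of France?", "answer": "Paris", "logprobs": [-0.05]},
    {"query": "Sky color?", "answer": "blue", "logprobs": [-1.5, -2.0]},
]
demos = select_pseudo_demos(candidates, k=2)
prompt = build_prompt(demos, "3+3?")
```

    The two higher-confidence answers become in-context examples, generalizing few-shot prompting to the zero-shot setting with no human-written demonstrations.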

    Customizing Large Language Models For Information Retrieval

    Publication No.: US20240378224A1

    Publication Date: 2024-11-14

    Application No.: US18654696

    Filing Date: 2024-05-03

    Applicant: Google LLC

    Abstract: Aspects of the disclosed technology include techniques and mechanisms for customizing large language models (LLMs) for information retrieval (IR). For a plurality of (query, corpus) pairs, an IR adapter may generate embeddings and adapted embeddings associated with each of the query and the corpus. The IR adapter may analyze the adapted embeddings using a similarity function to determine the similarity between the adapted embeddings. The output of the similarity function may be used to determine a correlation between the query and the corpus, wherein the correlation may be fed back into the IR adapter to train an LLM.
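
    A minimal sketch of the adapter loop, under stated assumptions: the diagonal linear adapter, the squared-error feedback signal, and the class name `IRAdapter` are illustrative stand-ins for the patented adapter architecture and training procedure, not the actual design. The adapter reshapes frozen embeddings so that the similarity of a relevant (query, corpus) pair increases.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

class IRAdapter:
    """Toy diagonal linear adapter applied on top of frozen embeddings."""
    def __init__(self, dim):
        self.scale = [1.0] * dim   # adapter weights, one per embedding dimension

    def adapt(self, emb):
        return [w * x for w, x in zip(self.scale, emb)]

    def train_step(self, query_emb, corpus_emb, label, lr=0.1):
        """Nudge the adapter so the adapted similarity tracks the relevance label."""
        sim = cosine(self.adapt(query_emb), self.adapt(corpus_emb))
        err = label - sim   # feedback derived from the (query, corpus) correlation
        for i in range(len(self.scale)):
            self.scale[i] += lr * err * query_emb[i] * corpus_emb[i]
        return sim

adapter = IRAdapter(dim=2)
# A relevant pair (label 1.0): similarity should rise as the adapter trains.
sims = [adapter.train_step([1.0, 0.0], [0.5, 0.5], label=1.0) for _ in range(20)]
```

    Feeding the similarity error back into the adapter, step by step, mirrors the abstract's loop of comparing adapted embeddings and using the correlation as a training signal.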

    Active Selective Prediction Using Ensembles and Self-training

    Publication No.: US20240249204A1

    Publication Date: 2024-07-25

    Application No.: US18419476

    Filing Date: 2024-01-22

    Applicant: Google LLC

    CPC classification number: G06N20/20

    Abstract: A method includes obtaining a set of unlabeled test data samples and, for each respective initial training step, determining a first average output for each unlabeled test data sample using a deep ensemble model. For each round of a plurality of rounds, the method includes selecting a subset of unlabeled test data samples based on the determined first average outputs, labeling each respective unlabeled test data sample in the subset of unlabeled test data samples, fine-tuning the deep ensemble model using the subset of labeled test data samples, and determining a second average output for each unlabeled test data sample using the fine-tuned deep ensemble model. The method also includes generating, using the set of unlabeled test data samples and the determined second average outputs, a pseudo-labeled set of training data samples. The method also includes training the deep ensemble model using the pseudo-labeled set of training data samples.
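
    The selection and pseudo-labeling steps can be sketched with a toy binary ensemble. All names and thresholds here are illustrative assumptions, not the patented method: averaged ensemble outputs near 0.5 flag the samples worth labeling, while confident outputs become pseudo-labels.

```python
def ensemble_average(models, x):
    """Average the outputs of all ensemble members on a sample."""
    return sum(m(x) for m in models) / len(models)

def select_uncertain(samples, models, k):
    """Pick the k samples whose averaged output is closest to 0.5 (most uncertain)."""
    return sorted(samples, key=lambda x: abs(ensemble_average(models, x) - 0.5))[:k]

def pseudo_label(samples, models, threshold=0.2):
    """Keep confident samples, paired with the ensemble's averaged prediction."""
    out = []
    for x in samples:
        p = ensemble_average(models, x)
        if abs(p - 0.5) > threshold:
            out.append((x, int(p > 0.5)))
    return out

# A two-member "deep ensemble" of trivial threshold classifiers on scalar inputs.
models = [lambda x: 1.0 if x > 0.4 else 0.0,
          lambda x: 1.0 if x > 0.6 else 0.0]
samples = [0.1, 0.5, 0.9]
queried = select_uncertain(samples, models, k=1)   # sent for labeling
pseudo = pseudo_label(samples, models)             # used for self-training
```

    The sample where the two members disagree is selected for labeling; the unanimous ones are pseudo-labeled, matching the round structure described in the abstract.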

    Multi-Layer Perceptron Architecture For Time Series Forecasting

    Publication No.: US20240249192A1

    Publication Date: 2024-07-25

    Application No.: US18417556

    Filing Date: 2024-01-19

    Applicant: Google LLC

    CPC classification number: G06N20/00

    Abstract: The present disclosure provides an architecture for time series forecasting. The architecture is based on multi-layer perceptrons (MLPs), which involve stacking linear models with non-linearities between them. In this architecture, the time-domain MLPs and feature-domain MLPs are used to perform both time-domain and feature-domain operations in a sequential manner, alternating between them. In some examples, auxiliary data is used as input, in addition to historical data. The auxiliary data can include known future data points, as well as static information that does not vary with time. The alternation of time-domain and feature-domain operations using linear models allows the architecture to learn temporal patterns while leveraging cross-variate information to generate more accurate time series forecasts.
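
    The alternation of time-domain and feature-domain MLPs can be sketched in a few lines. The dimensions, residual connections, and weight shapes below are assumptions chosen for illustration; the patent does not prescribe these specifics.

```python
import numpy as np

rng = np.random.default_rng(0)
T, C, H = 8, 3, 16   # time steps, channels (variates), hidden width

def relu(x):
    return np.maximum(x, 0)

# Illustrative randomly initialized weights.
W_time = rng.normal(scale=0.1, size=(T, T))      # mixes along the time axis
W_feat_in = rng.normal(scale=0.1, size=(C, H))   # feature-domain MLP, layer 1
W_feat_out = rng.normal(scale=0.1, size=(H, C))  # feature-domain MLP, layer 2

def mixer_block(x):
    """One alternation: a time-domain linear map, then a feature-domain MLP."""
    x = x + (W_time @ x)                        # operate over time, per feature
    x = x + relu(x @ W_feat_in) @ W_feat_out    # operate over features, per step
    return x

x = rng.normal(size=(T, C))       # historical window: T steps of C variates
y = mixer_block(mixer_block(x))   # stack two alternating blocks sequentially
```

    Stacking such blocks lets linear time-domain maps capture temporal patterns while the feature-domain MLPs exploit cross-variate information, as the abstract describes; auxiliary inputs would simply be concatenated as extra channels.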

    Framework for Learning to Transfer Learn
    Invention Publication

    Publication No.: US20240054345A1

    Publication Date: 2024-02-15

    Application No.: US18455182

    Filing Date: 2023-08-24

    Applicant: Google LLC

    CPC classification number: G06N3/08 G06N3/04

    Abstract: A method includes receiving a source data set and a target data set and identifying a loss function for a deep learning model based on the source data set and the target data set. The loss function includes encoder weights, source classifier layer weights, target classifier layer weights, coefficients, and a policy weight. During a first phase of each of a plurality of learning iterations for a learning to transfer learn (L2TL) architecture, the method also includes: applying gradient descent-based optimization to learn the encoder weights, the source classifier layer weights, and the target classifier layer weights that minimize the loss function; and determining the coefficients by sampling actions of a policy model. During a second phase of each of the plurality of learning iterations, the method also includes determining the policy weight that maximizes an evaluation metric.
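
    The two-phase structure can be illustrated with scalar stand-ins. The quadratic losses, the coefficient candidates, and the greedy policy update below are illustrative assumptions, not the L2TL formulation itself: phase one takes a gradient-descent step on a coefficient-weighted joint loss, and phase two picks the coefficient that maximizes a target-side evaluation metric.

```python
import random
random.seed(0)

# Toy scalar stand-ins for the L2TL quantities.
def source_loss(w): return (w - 2.0) ** 2
def target_loss(w): return (w - 3.0) ** 2
def eval_metric(w): return -target_loss(w)   # higher is better on the target task

w = 0.0                      # stands in for encoder + classifier weights
actions = [0.1, 0.5, 0.9]    # candidate source-loss coefficients from the policy

def joint_grad(w, alpha):
    """Gradient of alpha * source_loss + target_loss."""
    return alpha * 2 * (w - 2.0) + 2 * (w - 3.0)

for _ in range(50):
    # Phase 1: sample a coefficient, descend the joint loss.
    alpha = random.choice(actions)
    w -= 0.05 * joint_grad(w, alpha)
    # Phase 2: greedily keep the coefficient whose next step maximizes the
    # evaluation metric (a stand-in for updating the policy weight).
    best_alpha = max(actions, key=lambda a: eval_metric(w - 0.05 * joint_grad(w, a)))
```

    The weights settle between the source and target optima, and the policy learns to down-weight the source loss once the model is close to the target optimum.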

    DATA VALUATION USING REINFORCEMENT LEARNING
    Invention Publication

    Publication No.: US20230325675A1

    Publication Date: 2023-10-12

    Application No.: US18333301

    Filing Date: 2023-06-12

    Applicant: Google LLC

    CPC classification number: G06N3/084 G06F17/16 G06N3/08 G06N3/047

    Abstract: A method includes obtaining a batch of training samples. For each particular training sample in the batch of training samples, the method includes generating, using a data value estimator model and the particular training sample, a corresponding predicted value of the particular training sample when used to train a machine learning model. The method includes selecting, based on the corresponding predicted values, a subset of the batch of training samples. For each particular training sample in the subset of the batch of training samples, the method includes determining, using the machine learning model and the particular training sample, a corresponding prediction performance measurement. The method includes adjusting one or more estimator parameter values of the data value estimator model based on the corresponding prediction performance measurements.
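
    The estimator-adjustment loop can be sketched with a REINFORCE-style toy. Everything here is an illustrative assumption rather than the patented system: per-sample selection probabilities stand in for the data value estimator, a fixed sign-based classifier stands in for the downstream model, and its accuracy on the selected subset is the prediction performance measurement.

```python
import random
random.seed(0)

# Toy data: (feature, label). A clean label is 1 iff the feature is positive;
# the last two samples are deliberately mislabeled (noisy).
data = [(0.9, 1), (0.8, 1), (-0.7, 0), (-0.6, 0), (0.5, 0), (-0.4, 1)]

# Data value estimator: one selection probability per sample
# (a stand-in for a learned estimator network).
values = [0.5] * len(data)

def model_performance(subset):
    """Accuracy of a fixed sign-based model on the selected subset."""
    correct = sum(1 for x, y in subset if (x > 0) == (y == 1))
    return correct / max(len(subset), 1)

for _ in range(200):
    # Select a subset with probability given by each sample's estimated value.
    mask = [random.random() < v for v in values]
    subset = [d for d, m in zip(data, mask) if m]
    reward = model_performance(subset) - 0.5   # centered performance signal
    # Adjust estimator parameters: raise values that co-occur with high reward.
    for i, m in enumerate(mask):
        values[i] += 0.05 * reward * (1 if m else -1)
        values[i] = min(max(values[i], 0.01), 0.99)
```

    Over the iterations the cleanly labeled samples accumulate higher estimated values than the mislabeled ones, which is the behavior the abstract's feedback loop is designed to produce.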

    Interpretable Anomaly Detection By Generalized Additive Models With Neural Decision Trees

    Publication No.: US20230274154A1

    Publication Date: 2023-08-31

    Application No.: US18113267

    Filing Date: 2023-02-23

    Applicant: Google LLC

    CPC classification number: G06N3/09 G06N3/088 G06N3/0895

    Abstract: Aspects of the disclosure provide for interpretable anomaly detection using a generalized additive model (GAM) trained using unsupervised and supervised learning techniques. A GAM is adapted to detect anomalies using an anomaly detection partial identification (AD PID) loss function for handling noisy or heterogeneous features in model input. A semi-supervised data interpretable anomaly detection (DIAD) system can generate more accurate results over models trained for anomaly detection using strictly unsupervised techniques. In addition, output from the DIAD system includes explanations, for example as graphs or plots, of relatively important input features that contribute to the model output by different factors, providing interpretable results from which the DIAD system can be improved upon.
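
    The interpretability property of a GAM-based detector is easy to show in miniature. The hand-written shape functions below are stand-ins for the trained neural decision trees, and the feature names are invented for illustration: because the anomaly score is a sum of per-feature shape functions, each feature's contribution is directly inspectable.

```python
def shape_temperature(t):
    """Deviation from an assumed normal operating band of 20-30 degrees."""
    return max(0.0, 20.0 - t, t - 30.0)

def shape_pressure(p):
    """Deviation beyond an assumed +/-5 tolerance around 100."""
    return max(0.0, abs(p - 100.0) - 5.0)

SHAPES = {"temperature": shape_temperature, "pressure": shape_pressure}

def anomaly_score(sample):
    """Return the total score and per-feature contributions (the explanation)."""
    contributions = {name: fn(sample[name]) for name, fn in SHAPES.items()}
    return sum(contributions.values()), contributions

score, explanation = anomaly_score({"temperature": 35.0, "pressure": 101.0})
```

    The explanation shows the score is driven entirely by the out-of-band temperature, the kind of per-feature attribution (plotted as graphs in the DIAD system) that makes the model's output interpretable.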

    Unsupervised Anomaly Detection With Self-Trained Classification

    Publication No.: US20220391724A1

    Publication Date: 2022-12-08

    Application No.: US17825788

    Filing Date: 2022-05-26

    Applicant: Google LLC

    Abstract: Aspects of the disclosure provide for methods, systems, and apparatus, including computer-readable storage media, for anomaly detection using a machine learning framework trained entirely on unlabeled training data including both anomalous and non-anomalous training examples. A self-supervised one-class classifier (STOC) refines the training data to exclude anomalous training examples, using an ensemble of machine learning models. The ensemble of models is retrained on the refined training data. The STOC can also use the refined training data to train a representation learning model to generate one or more feature values for each training example, which can be processed by the trained ensemble of models and eventually used for training an output classifier model to predict whether input data is indicative of anomalous or non-anomalous data.
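
    The refine-and-retrain loop can be sketched on scalar data. The distance-from-mean scorer, the rotated data slices, and the drop fraction are all illustrative assumptions, not the STOC design: an ensemble of simple one-class scorers ranks the samples, the most anomalous fraction is excluded, and the ensemble is refit on the refined data.

```python
import statistics

def make_scorer(data):
    """Distance from the data mean as a trivial one-class anomaly score."""
    mu = statistics.mean(data)
    return lambda x: abs(x - mu)

def refine(data, ensembles=3, drop_fraction=0.2, rounds=2):
    """Iteratively exclude the highest-scoring (most anomalous) samples."""
    for _ in range(rounds):
        # "Ensemble": scorers fit on rotated slices of the current data.
        scorers = [make_scorer(data[i::ensembles]) for i in range(ensembles)]
        scored = sorted((sum(s(x) for s in scorers), x) for x in data)
        keep = int(len(scored) * (1 - drop_fraction))
        data = [x for _, x in scored[:keep]]   # retrain on the refined data
    return data

# Mostly non-anomalous values near 1.0, with two injected anomalies.
raw = [1.0, 1.1, 0.9, 1.05, 0.95, 9.0, 1.02, 0.98, -7.0, 1.03]
refined = refine(raw)
```

    After two rounds the injected outliers are gone, leaving a refined set a downstream representation learner and output classifier could be trained on.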

    REINFORCEMENT LEARNING BASED LOCALLY INTERPRETABLE MODELS

    Publication No.: US20220327328A1

    Publication Date: 2022-10-13

    Application No.: US17809798

    Filing Date: 2022-06-29

    Applicant: Google LLC

    Abstract: A method for training a locally interpretable model includes obtaining a set of training samples and training a black-box model using the set of training samples. The method also includes generating, using the trained black-box model and the set of training samples, a set of auxiliary training samples and training a baseline interpretable model using the set of auxiliary training samples. The method also includes training, using the set of auxiliary training samples and baseline interpretable model, an instance-wise weight estimator model. For each auxiliary training sample in the set of auxiliary training samples, the method also includes determining, using the trained instance-wise weight estimator model, a selection probability for the auxiliary training sample. The method also includes selecting, based on the selection probabilities, a subset of auxiliary training samples and training the locally interpretable model using the subset of auxiliary training samples.
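
    The end goal of the pipeline, a locally faithful interpretable model, can be shown in a small sketch. Here a fixed proximity kernel stands in for the learned instance-wise weight estimator (an assumption for brevity), and a quadratic function stands in for the black-box model: auxiliary samples are labeled by the black box, weighted by their relevance to the instance being explained, and fit with a linear model.

```python
import math

def black_box(x):
    """Opaque teacher model; a quadratic stands in for a trained black box."""
    return x * x

def local_linear_fit(x0, xs, bandwidth=0.5):
    """Weighted least-squares linear fit to the black box's outputs around x0."""
    ws = [math.exp(-((x - x0) / bandwidth) ** 2) for x in xs]  # instance weights
    ys = [black_box(x) for x in xs]        # auxiliary labels from the teacher
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    slope = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    return slope, my - slope * mx

xs = [i / 10 for i in range(-30, 31)]        # auxiliary sample grid
slope, intercept = local_linear_fit(2.0, xs)  # local explanation around x0 = 2
```

    The fitted slope approximates the black box's local derivative at x0 = 2 (about 4 for the quadratic), giving an interpretable linear surrogate that is accurate in the neighborhood that matters, which is exactly what the instance-wise weighting in the abstract is for.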
