CHARACTER-LEVEL ATTENTION NEURAL NETWORKS
    Invention Publication

    Publication No.: US20240289552A1

    Publication Date: 2024-08-29

    Application No.: US18564859

    Application Date: 2022-05-27

    Applicant: Google LLC

    CPC classification number: G06F40/284

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing a machine learning task on an input sequence of characters that has a respective character at each of a plurality of character positions to generate a network output. One of the systems includes a neural network configured to perform the machine learning task, the neural network comprising a gradient-based sub-word tokenizer and an output neural network. The gradient-based sub-word tokenizer is configured to apply a learned, i.e., flexible, sub-word tokenization strategy to the input sequence of characters to generate a sequence of latent sub-word representations. The output neural network is configured to process the latent sub-word representations to generate the network output for the task.
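
    A minimal sketch of the gradient-based sub-word tokenizer described above, in PyTorch: candidate block poolings of the character embeddings are scored and softly mixed at every position, so the tokenization strategy remains differentiable and learnable end to end. The block sizes, scoring network, mean pooling, and downsampling rate are illustrative assumptions, not details from the patent.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradientBasedSubwordTokenizer(nn.Module):
        # Assumes the sequence length is divisible by every block size
        # and by the downsample rate.
        def __init__(self, d_model, block_sizes=(1, 2, 4), downsample=2):
            super().__init__()
            self.block_sizes = block_sizes
            self.downsample = downsample
            self.scorer = nn.Linear(d_model, 1)  # scores each candidate block

        def forward(self, char_embeddings):
            # char_embeddings: (batch, num_chars, d_model)
            candidates, scores = [], []
            for b in self.block_sizes:
                # Mean-pool non-overlapping blocks of size b, then upsample
                # back to character resolution so candidates align per position.
                pooled = F.avg_pool1d(char_embeddings.transpose(1, 2), b, b)
                up = pooled.repeat_interleave(b, dim=2).transpose(1, 2)
                candidates.append(up)
                scores.append(self.scorer(up))
            # Soft, differentiable choice among block sizes at every position.
            weights = torch.softmax(torch.cat(scores, dim=-1), dim=-1)
            mixed = sum(w.unsqueeze(-1) * c
                        for w, c in zip(weights.unbind(-1), candidates))
            # Downsample to the shorter sequence of latent sub-word vectors.
            latent = F.avg_pool1d(mixed.transpose(1, 2),
                                  self.downsample, self.downsample)
            return latent.transpose(1, 2)

    The resulting latent sub-word sequence would then be consumed by the output neural network (for example, a Transformer encoder) to produce the task output.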

    SELF-SUPERVISED CONTRASTIVE LEARNING USING RANDOM FEATURE CORRUPTION

    Publication No.: US20220383120A1

    Publication Date: 2022-12-01

    Application No.: US17827448

    Application Date: 2022-05-27

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network having a plurality of network parameters. One of the methods includes obtaining an unlabeled training input from a set of unlabeled training data; processing the unlabeled training input to generate a first embedding; generating a corrupted version of the unlabeled training input, comprising determining a proper subset of the feature dimensions and, for each feature dimension that is in the proper subset of feature dimensions, applying a corruption to the respective feature in the feature dimension using one or more feature values sampled from a marginal distribution of the feature dimension as specified in the set of unlabeled training data; processing the corrupted version of the unlabeled training input to generate a second embedding; and determining an update to the current values of the plurality of network parameters.
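
    A minimal sketch of the corruption and contrastive steps described in this abstract, assuming tabular inputs of shape (batch, features), a fixed per-feature corruption rate, and an InfoNCE-style objective; the rate, temperature, and loss form are illustrative assumptions rather than details from the patent.

    import torch
    import torch.nn.functional as F

    def corrupt(batch, dataset, rate=0.6):
        # batch: (B, D) training inputs; dataset: (N, D) unlabeled training data.
        B, D = batch.shape
        mask = torch.rand(B, D, device=batch.device) < rate  # subset of dims
        # Sampling a row index independently per (example, feature) draws each
        # replacement value from that feature's marginal distribution as
        # specified by the unlabeled training set.
        rows = torch.randint(0, dataset.size(0), (B, D), device=batch.device)
        marginal_samples = dataset[rows, torch.arange(D, device=batch.device)]
        return torch.where(mask, marginal_samples, batch)

    def contrastive_loss(encoder, batch, dataset, temperature=0.1):
        z1 = F.normalize(encoder(batch), dim=-1)                      # first embedding
        z2 = F.normalize(encoder(corrupt(batch, dataset)), dim=-1)    # second embedding
        logits = z1 @ z2.t() / temperature
        # Each input is positive with its own corrupted view, negative
        # with every other example in the batch.
        targets = torch.arange(batch.size(0), device=logits.device)
        return F.cross_entropy(logits, targets)

    Backpropagating this loss yields the update to the current values of the network parameters.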

    Systems and Methods for Pretraining Models for Diverse Downstream Tasks

    Publication No.: US20250156756A1

    Publication Date: 2025-05-15

    Application No.: US18835666

    Application Date: 2022-12-30

    Applicant: Google LLC

    Abstract: An example method for pretraining a machine-learned model is provided. The example method includes obtaining a plurality of different combinations of configuration parameters of a pretraining objective framework. The example method includes generating, using the pretraining objective framework, a plurality of corrupted training examples from one or more training examples, wherein the plurality of corrupted training examples are respectively generated according to the plurality of different combinations. The example method includes inputting the plurality of corrupted training examples into the machine-learned model, wherein the machine-learned model is configured to generate uncorrupted subportions corresponding to corrupted subportions of the corrupted training examples. The example method includes obtaining, from the machine-learned model, a plurality of outputs respectively generated by the machine-learned model based on the plurality of corrupted training examples. The example method includes updating one or more parameters of the machine-learned model based on an evaluation of the plurality of outputs.
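
    An illustrative sketch of the pretraining objective framework: each combination of configuration parameters (here, an assumed pair of mean span length and corruption rate) yields a differently corrupted training example, and the model is trained to reproduce the uncorrupted sub-portions. The specific parameter values and the sentinel scheme are assumptions, loosely patterned on span-corruption objectives.

    import random

    CONFIGS = [  # assumed (mean_span_length, corruption_rate) combinations
        (3, 0.15),
        (8, 0.15),
        (3, 0.5),
    ]

    def corrupt_example(tokens, mean_span, rate, sentinel="<extra_id_{}>"):
        n_corrupt = max(1, int(len(tokens) * rate))
        corrupted, targets, i, sid = [], [], 0, 0
        while i < len(tokens):
            if n_corrupt > 0 and random.random() < rate:
                # Replace a span of tokens with a sentinel marker.
                span = min(max(1, int(random.gauss(mean_span, 1))), n_corrupt)
                corrupted.append(sentinel.format(sid))
                targets += [sentinel.format(sid)] + tokens[i:i + span]
                i += span
                n_corrupt -= span
                sid += 1
            else:
                corrupted.append(tokens[i])
                i += 1
        # Model input is the corrupted sequence; the training target is the
        # uncorrupted sub-portions corresponding to each sentinel.
        return corrupted, targets

    example = "the quick brown fox jumps over the lazy dog".split()
    batch = [corrupt_example(example, s, r) for s, r in CONFIGS]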

    Machine Learning Models as a Differentiable Search Index for Directly Predicting Resource Retrieval Results

    Publication No.: US20250165469A1

    Publication Date: 2025-05-22

    Application No.: US18837122

    Application Date: 2023-02-09

    Applicant: Google LLC

    Abstract: Provided are systems and methods for training and/or use of a machine learning model that can directly predict one or more resources that are responsive to a query as an output of the model. In particular, the present disclosure demonstrates that information retrieval can be accomplished with a single machine learning model (e.g., that has a neural network architecture such as, for example, a Transformer architecture) in which all information about the corpus is encoded in the parameters of the model. To this end, the present disclosure introduces the Differentiable Search Index (DSI), a new paradigm that learns a query-to-result (e.g., in text-to-text format) model that maps queries (e.g., text strings) directly to relevant resource identifiers (“docids”) (e.g., text and/or number strings that identify relevant resources); in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying retrieval.
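
    A minimal sketch of the DSI idea in text-to-text form: one sequence-to-sequence model is trained to map document text to its docid (indexing) and query text to the relevant docid (retrieval), so inference needs only the model's parameters. The T5 backbone from Hugging Face Transformers and the helper names are assumptions for illustration; the patent requires only a machine learning model, e.g., with a Transformer architecture.

    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    def training_pairs(corpus, queries):
        # Indexing phase: document text -> docid string.
        for docid, text in corpus.items():
            yield text, docid
        # Retrieval phase: query text -> relevant docid string.
        for query, docid in queries:
            yield query, docid

    def loss_on(source, docid):
        inputs = tokenizer(source, return_tensors="pt", truncation=True)
        labels = tokenizer(docid, return_tensors="pt").input_ids
        return model(**inputs, labels=labels).loss

    def retrieve(query):
        # At inference, the model answers the query using only its parameters:
        # no inverted index or nearest-neighbor search is consulted.
        inputs = tokenizer(query, return_tensors="pt")
        ids = model.generate(**inputs, max_new_tokens=8)
        return tokenizer.decode(ids[0], skip_special_tokens=True)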

    SORTING ATTENTION NEURAL NETWORKS
    Invention Application

    Publication No.: US20210248450A1

    Publication Date: 2021-08-12

    Application No.: US17169718

    Application Date: 2021-02-08

    Applicant: Google LLC

    Abstract: A system for performing a machine learning task on a network input is described. The system includes one or more computers and one or more storage devices storing instructions that, when executed by the one or more computers, cause the one or more computers to implement (i) multiple sorting networks in which each sorting network is configured to sort vector blocks in a sequence of vector blocks to generate a sorted sequence of vector blocks; and (ii) a sorting attention neural network configured to perform the machine learning task on the input sequence by executing multiple sorting attention mechanisms using the sorting networks.
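
    A simplified sketch of one sorting attention mechanism: a small sorting network scores blocks of the input sequence, the blocks are reordered by score, and standard attention runs within each reordered block. The block size, the hard argsort (a real sorting network would need a differentiable relaxation to train end to end), and the head count are simplifying assumptions for illustration.

    import torch
    import torch.nn as nn

    class SortingAttention(nn.Module):
        # Assumes the sequence length is divisible by block_size and
        # d_model is divisible by n_heads.
        def __init__(self, d_model, block_size, n_heads=4):
            super().__init__()
            self.block_size = block_size
            self.sorter = nn.Linear(d_model, 1)  # sorting network: scores blocks
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

        def forward(self, x):
            B, L, d = x.shape
            blocks = x.view(B, L // self.block_size, self.block_size, d)
            # Score each block by its mean vector, then sort blocks by score
            # (hard sort here; a differentiable sort would replace argsort).
            scores = self.sorter(blocks.mean(dim=2)).squeeze(-1)  # (B, n_blocks)
            order = scores.argsort(dim=1)
            sorted_blocks = torch.gather(
                blocks, 1,
                order[:, :, None, None].expand(-1, -1, self.block_size, d))
            # Attention within each sorted block (blocks folded into the batch).
            flat = sorted_blocks.reshape(-1, self.block_size, d)
            out, _ = self.attn(flat, flat, flat)
            return out.reshape(B, L, d)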
