Systems and methods for contrastive learning of visual representations

    Publication No.: US12254413B2

    Publication Date: 2025-03-18

    Application No.: US18343579

    Filing Date: 2023-06-28

    Applicant: Google LLC

    Abstract: Systems, methods, and computer program products for performing semi-supervised contrastive learning of visual representations are provided. For example, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. Further, the present disclosure also provides improvements for semi-supervised contrastive learning. For example, a computer-implemented method may include performing semi-supervised contrastive learning based on a set of one or more unlabeled training data, generating an image classification model based on a portion of a plurality of layers in a projection head neural network used in performing the contrastive learning, performing fine-tuning of the image classification model based on a set of one or more labeled training data, and after performing the fine-tuning, distilling the image classification model to a student model comprising a relatively smaller number of parameters than the image classification model.
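
    A minimal sketch of the two training objectives this abstract refers to, assuming PyTorch: a contrastive (NT-Xent style) loss for the self-supervised pretraining, and a distillation loss for transferring the fine-tuned model to a smaller student. Function names, temperatures, and tensor shapes here are illustrative and not taken from the patent.

        import torch
        import torch.nn.functional as F

        def nt_xent_loss(z1, z2, temperature=0.5):
            # Normalized temperature-scaled cross-entropy over two augmented views
            # z1, z2 of the same image batch; each has shape [N, D].
            z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # [2N, D]
            sim = torch.matmul(z, z.t()) / temperature               # pairwise similarities
            mask = torch.eye(sim.shape[0], dtype=torch.bool)
            sim = sim.masked_fill(mask, float('-inf'))               # drop self-similarity
            n = z1.shape[0]
            # The positive for row i is the other augmented view of the same image.
            targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
            return F.cross_entropy(sim, targets)

        def distillation_loss(student_logits, teacher_logits, temperature=2.0):
            # After fine-tuning, the larger model's predictions on unlabeled images
            # become soft targets for a student with fewer parameters.
            teacher = F.softmax(teacher_logits / temperature, dim=1)
            student = F.log_softmax(student_logits / temperature, dim=1)
            return F.kl_div(student, teacher, reduction='batchmean')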

    END-TO-END SPEECH WAVEFORM GENERATION THROUGH DATA DENSITY GRADIENT ESTIMATION

    Publication No.: US20230252974A1

    Publication Date: 2023-08-10

    Application No.: US18010438

    Filing Date: 2021-09-02

    Applicant: Google LLC

    CPC classification number: G10L13/08 G10L21/0208

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating waveforms conditioned on phoneme sequences. In one aspect, a method comprises: obtaining a phoneme sequence; processing the phoneme sequence using an encoder neural network to generate a hidden representation of the phoneme sequence; generating, from the hidden representation, a conditioning input; initializing a current waveform output; and generating a final waveform output that defines an utterance of the phoneme sequence by a speaker by updating the current waveform output at each of a plurality of iterations, wherein each iteration corresponds to a respective noise level, and wherein the updating comprises, at each iteration: processing (i) the current waveform output and (ii) the conditioning input using a noise estimation neural network to generate a noise output; and updating the current waveform output using the noise output and the noise level for the iteration.
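
    A sketch of the iterative refinement loop the abstract describes, assuming PyTorch; `noise_estimator`, the noise schedule `betas`, the simplified DDPM-style update rule, and the waveform-length upsampling factor are hypothetical stand-ins for the networks and hyperparameters in the patent.

        import torch

        @torch.no_grad()
        def generate_waveform(noise_estimator, conditioning, betas, upsample_factor=300):
            # `noise_estimator(y, conditioning, noise_level)` is assumed to return the
            # estimated noise in the current waveform output y.
            betas = torch.as_tensor(betas)
            alphas = 1.0 - betas
            alpha_bars = torch.cumprod(alphas, dim=0)
            # Initialize the current waveform output with Gaussian noise.
            y = torch.randn(1, conditioning.shape[-1] * upsample_factor)
            for t in reversed(range(len(betas))):
                # Noise output for this iteration's noise level.
                eps = noise_estimator(y, conditioning, alpha_bars[t])
                # Update the current waveform output using the noise output.
                y = (y - (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
                if t > 0:
                    y = y + torch.sqrt(betas[t]) * torch.randn_like(y)
            return y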

    Training policy neural networks using path consistency learning

    Publication No.: US11429844B2

    Publication Date: 2022-08-30

    Application No.: US16904785

    Filing Date: 2020-06-18

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network used to select actions to be performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes obtaining path data defining a path through the environment traversed by the agent. A consistency error is determined for the path from a combined reward, first and last soft-max state values, and a path likelihood. A value update for the current values of the policy neural network parameters is determined from at least the consistency error. The value update is used to adjust the current values of the policy neural network parameters.
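
    A sketch of the consistency error computation for one path, assuming PyTorch; the discount factor, entropy weight `tau`, and function name are assumptions, and the squared error in the trailing comment stands in for the value update described in the abstract.

        import torch

        def path_consistency_error(values, log_pis, rewards, discount=0.99, tau=0.1):
            # `values[0]` and `values[-1]` are the soft-max state values of the first
            # and last states on the path, `log_pis` the log-probabilities the policy
            # network assigned to the chosen actions, `rewards` the rewards received.
            # All three are 1-D tensors over the steps of the path.
            d = rewards.shape[0]
            discounts = discount ** torch.arange(d, dtype=torch.float32)
            # Combined (discounted, entropy-regularized) reward along the path.
            combined_reward = torch.sum(discounts * (rewards - tau * log_pis))
            return -values[0] + (discount ** d) * values[-1] + combined_reward

        # The squared consistency error, e.g. 0.5 * err ** 2, is the quantity whose
        # gradient yields the update to the policy neural network parameters.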

    Systems and methods for contrastive learning of visual representations

    Publication No.: US11354778B2

    Publication Date: 2022-06-07

    Application No.: US16847163

    Filing Date: 2020-04-13

    Applicant: Google LLC

    Abstract: Provided are systems and methods for contrastive learning of visual representations. In particular, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. In contrast to certain existing techniques, the contrastive self-supervised learning algorithms described herein do not require specialized architectures or a memory bank. Some example implementations of the proposed approaches can be referred to as a simple framework for contrastive learning of representations or “SimCLR.” Further example aspects are described below and provide the following benefits and insights.
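
    A sketch of two of the ingredients this abstract highlights, assuming PyTorch and torchvision: a composed data augmentation pipeline and a learnable nonlinear projection head between the representation and the contrastive loss. The augmentation parameters and layer sizes are illustrative, not values from the patent.

        import torch.nn as nn
        from torchvision import transforms

        # Data augmentation of the kind the framework leverages: random crop, flip,
        # strong color distortion, grayscale, and Gaussian blur.
        simclr_augment = transforms.Compose([
            transforms.RandomResizedCrop(224),
            transforms.RandomHorizontalFlip(),
            transforms.RandomApply([transforms.ColorJitter(0.8, 0.8, 0.8, 0.2)], p=0.8),
            transforms.RandomGrayscale(p=0.2),
            transforms.GaussianBlur(kernel_size=23),
            transforms.ToTensor(),
        ])

        class ProjectionHead(nn.Module):
            # Learnable nonlinear transformation g(.) between the encoder
            # representation h and the space where the contrastive loss is applied.
            def __init__(self, in_dim=2048, hidden_dim=2048, out_dim=128):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(in_dim, hidden_dim),
                    nn.ReLU(inplace=True),
                    nn.Linear(hidden_dim, out_dim),
                )

            def forward(self, h):
                return self.net(h)   # z = g(h); the contrastive loss is computed on z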

    Device placement optimization with reinforcement learning

    Publication No.: US10692003B2

    Publication Date: 2020-06-23

    Application No.: US16445330

    Filing Date: 2019-06-19

    Applicant: Google LLC

    Abstract: A method for determining a placement for machine learning model operations across multiple hardware devices is described. The method includes receiving data specifying a machine learning model to be placed for distributed processing on multiple hardware devices; generating, from the data, a sequence of operation embeddings, each operation embedding in the sequence characterizing respective operations necessary to perform the processing of the machine learning model; processing the sequence of operation embeddings using a placement recurrent neural network in accordance with first values of a plurality of network parameters of the placement recurrent neural network to generate a network output that defines a placement of the operations characterized by the operation embeddings in the sequence across the plurality of devices; and scheduling the machine learning model for processing by the multiple hardware devices by placing the operations on the multiple devices according to the placement defined by the network output.
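
    A sketch of a placement recurrent neural network of the kind the abstract describes, assuming PyTorch; the LSTM architecture, dimensions, and the REINFORCE-style update noted in the trailing comment are assumptions, not the patent's exact formulation.

        import torch
        import torch.nn as nn

        class PlacementRNN(nn.Module):
            # Reads one operation embedding per step and emits a distribution over
            # hardware devices for that operation.
            def __init__(self, embed_dim=64, hidden_dim=256, num_devices=4):
                super().__init__()
                self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
                self.to_device_logits = nn.Linear(hidden_dim, num_devices)

            def forward(self, op_embeddings):                  # [1, num_ops, embed_dim]
                hidden, _ = self.rnn(op_embeddings)
                logits = self.to_device_logits(hidden)         # [1, num_ops, num_devices]
                dist = torch.distributions.Categorical(logits=logits)
                placement = dist.sample()                      # one device index per operation
                return placement, dist.log_prob(placement).sum()

        # A REINFORCE-style update would measure the runtime of the sampled
        # placement and push the network toward faster placements, e.g.
        # loss = (measured_runtime - baseline) * log_prob.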

    Training sequence generation neural networks using quality scores

    Publication No.: US10540585B2

    Publication Date: 2020-01-21

    Application No.: US16421406

    Filing Date: 2019-05-23

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a sequence generation neural network. One of the methods includes obtaining a batch of training examples; for each of the training examples: processing the training network input in the training example using the neural network to generate an output sequence; for each particular output position in the output sequence: identifying a prefix that includes the system outputs at positions before the particular output position in the output sequence, for each possible system output in the vocabulary, determining a highest quality score that can be assigned to any candidate output sequence that includes the prefix followed by the possible system output, and determining an update to the current values of the network parameters that increases a likelihood that the neural network generates a system output at the position that has a high quality score.
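
    A sketch of how the per-position quality scores might be turned into a training signal, assuming PyTorch; converting the highest achievable quality score for each possible next output into a soft target distribution is one reasonable reading of "increases a likelihood ... that has a high quality score", not necessarily the patented method.

        import torch.nn.functional as F

        def quality_weighted_loss(logits, prefix_q_scores, temperature=1.0):
            # `logits` has shape [seq_len, vocab]: the network's scores at each
            # output position. `prefix_q_scores` has the same shape and holds, for
            # each position and each possible system output, the highest quality
            # score of any candidate sequence extending the prefix with that output.
            targets = F.softmax(prefix_q_scores / temperature, dim=-1)
            log_probs = F.log_softmax(logits, dim=-1)
            # Cross-entropy against the quality-derived soft targets raises the
            # likelihood of outputs that lead to high-quality sequences.
            return -(targets * log_probs).sum(dim=-1).mean()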

    Image Enhancement via Iterative Refinement based on Machine Learning Models

    Publication No.: US20250061551A1

    Publication Date: 2025-02-20

    Application No.: US18939994

    Filing Date: 2024-11-07

    Applicant: Google LLC

    Abstract: A method includes receiving, by a computing device, training data comprising a plurality of pairs of images, wherein each pair comprises an image and at least one corresponding target version of the image. The method also includes training a neural network based on the training data to predict an enhanced version of an input image, wherein the training of the neural network comprises applying a forward Gaussian diffusion process that adds Gaussian noise to the at least one corresponding target version of each of the plurality of pairs of images to enable iterative denoising of the input image, wherein the iterative denoising is based on a reverse Markov chain associated with the forward Gaussian diffusion process. The method additionally includes outputting the trained neural network.
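
    A sketch of the training step this abstract describes, assuming PyTorch: the forward Gaussian diffusion process corrupts the target image to a sampled noise level, and the network is trained to predict that noise conditioned on the input image, which enables iterative denoising at inference time. The `denoiser` signature and the noise-level sampling are assumptions.

        import torch
        import torch.nn.functional as F

        def diffusion_training_step(denoiser, input_image, target_image, alpha_bars):
            # Sample a noise level from the forward diffusion schedule.
            t = torch.randint(0, alpha_bars.shape[0], (1,))
            alpha_bar = alpha_bars[t]
            noise = torch.randn_like(target_image)
            # Forward process: add Gaussian noise to the target version of the image.
            noisy_target = torch.sqrt(alpha_bar) * target_image + torch.sqrt(1 - alpha_bar) * noise
            # The network predicts the noise, conditioned on the input image.
            predicted_noise = denoiser(noisy_target, input_image, alpha_bar)
            # Minimizing this loss supports the reverse Markov chain used for
            # iterative denoising of an input image at inference time.
            return F.mse_loss(predicted_noise, noise)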
