-
Publication No.: US20250086462A1
Publication Date: 2025-03-13
Application No.: US18960623
Filing Date: 2024-11-26
Applicant: Google LLC
Inventor: Ting Chen , Simon Kornblith , Mohammad Norouzi , Geoffrey Everest Hinton , Kevin Jordan Swersky
IPC: G06N3/084 , G06F18/21 , G06F18/214 , G06F18/241 , G06N3/08 , G06V10/764 , G06V10/774 , G06V10/778
Abstract: Systems, methods, and computer program products for performing semi-supervised contrastive learning of visual representations are provided. For example, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. Further, the present disclosure also provides improvements for semi-supervised contrastive learning. For example, a computer-implemented method may include performing semi-supervised contrastive learning based on a set of one or more unlabeled training data, generating an image classification model based on a portion of a plurality of layers in a projection head neural network used in performing the contrastive learning, performing fine-tuning of the image classification model based on a set of one or more labeled training data, and, after performing the fine-tuning, distilling the image classification model to a student model comprising a smaller number of parameters than the image classification model.
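The final distillation step the abstract describes (training a smaller student model to match the fine-tuned teacher) can be sketched as a temperature-softened cross-entropy between the two models' output distributions. This NumPy sketch is illustrative only; the function name, the temperature default, and the loss form are assumptions, not the claimed implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between temperature-softened teacher and student
    distributions; minimizing it pulls the student toward the teacher."""
    t = softmax(teacher_logits / temperature)
    s = softmax(student_logits / temperature)
    return float(-np.mean(np.sum(t * np.log(s + 1e-12), axis=-1)))
```

The loss is minimized when the student reproduces the teacher's softened distribution, which is what lets the smaller model inherit the teacher's behavior.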
-
Publication No.: US20210327029A1
Publication Date: 2021-10-21
Application No.: US16847163
Filing Date: 2020-04-13
Applicant: Google LLC
Inventor: Ting Chen , Simon Kornblith , Mohammad Norouzi , Geoffrey Everest Hinton
Abstract: Provided are systems and methods for contrastive learning of visual representations. In particular, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. In contrast to certain existing techniques, the contrastive self-supervised learning algorithms described herein do not require specialized architectures or a memory bank. Some example implementations of the proposed approaches can be referred to as a simple framework for contrastive learning of representations or “SimCLR.” Further example aspects are described below and provide the following benefits and insights.
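The contrastive loss applied after SimCLR's projection head is commonly written as NT-Xent (normalized temperature-scaled cross-entropy) over a batch of 2N augmented views, where adjacent rows are the two views of one example. A minimal NumPy sketch, for illustration; the function name and the adjacent-row pairing convention are assumptions:

```python
import numpy as np

def nt_xent(z, temperature=0.5):
    """NT-Xent loss over 2N projected embeddings; rows 2k and 2k+1
    are the two augmented views of example k."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine-similarity space
    sim = z @ z.T / temperature
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    pos = np.arange(n) ^ 1                            # partner view: 0<->1, 2<->3, ...
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(n), pos]))
```

The loss drops as each view's partner becomes more similar than all other batch entries, which is the "no memory bank needed" property the abstract highlights: negatives come from the batch itself.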
-
Publication No.: US20210319266A1
Publication Date: 2021-10-14
Application No.: US17018372
Filing Date: 2020-09-11
Applicant: Google LLC
Inventor: Ting Chen , Simon Kornblith , Mohammad Norouzi , Geoffrey Everest Hinton
Abstract: Systems, methods, and computer program products for performing semi-supervised contrastive learning of visual representations are provided. For example, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. Further, the present disclosure also provides improvements for semi-supervised contrastive learning. For example, a computer-implemented method may include performing semi-supervised contrastive learning based on a set of one or more unlabeled training data, generating an image classification model based on a portion of a plurality of layers in a projection head neural network used in performing the contrastive learning, performing fine-tuning of the image classification model based on a set of one or more labeled training data, and, after performing the fine-tuning, distilling the image classification model to a student model comprising a smaller number of parameters than the image classification model.
-
Publication No.: US12254413B2
Publication Date: 2025-03-18
Application No.: US18343579
Filing Date: 2023-06-28
Applicant: Google LLC
Inventor: Ting Chen , Simon Kornblith , Mohammad Norouzi , Geoffrey Everest Hinton , Kevin Jordan Swersky
IPC: G06V10/20 , G06F18/21 , G06F18/214 , G06F18/241 , G06N3/08 , G06N3/084 , G06V10/764 , G06V10/774 , G06V10/778
Abstract: Systems, methods, and computer program products for performing semi-supervised contrastive learning of visual representations are provided. For example, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. Further, the present disclosure also provides improvements for semi-supervised contrastive learning. For example, a computer-implemented method may include performing semi-supervised contrastive learning based on a set of one or more unlabeled training data, generating an image classification model based on a portion of a plurality of layers in a projection head neural network used in performing the contrastive learning, performing fine-tuning of the image classification model based on a set of one or more labeled training data, and, after performing the fine-tuning, distilling the image classification model to a student model comprising a smaller number of parameters than the image classification model.
-
Publication No.: US20250053786A1
Publication Date: 2025-02-13
Application No.: US18366638
Filing Date: 2023-08-07
Applicant: Google LLC
Inventor: Ting Chen , Ruixiang Zhang , Geoffrey E. Hinton
IPC: G06N3/0455
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a network output of high-dimensional data comprising one or more output tokens. In one aspect, a system comprises a neural network system configured to initialize an analog bit representation of the network output comprising a set of continuous numeric values for each of the output tokens. The neural network system generates an updated analog bit representation that comprises a set of updated continuous numeric values. At each of a plurality of update iterations, the neural network system processes a diffusion input comprising the analog bit representation using a diffusion machine learning model to update the analog bit representation.
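The "analog bit" representation described above amounts to encoding each discrete token's binary digits as continuous values a diffusion model can operate on, then thresholding back to bits after the final update iteration. A minimal NumPy sketch for illustration; the function names and the little-endian bit order are assumptions, not the claimed encoding:

```python
import numpy as np

def int_to_analog_bits(x, num_bits):
    """Encode integers as 'analog bits': binary digits scaled to {-1.0, 1.0}."""
    bits = (x[..., None] >> np.arange(num_bits)) & 1   # little-endian bit planes
    return bits.astype(np.float32) * 2.0 - 1.0

def analog_bits_to_int(bits):
    """Decode by thresholding each continuous value at zero."""
    hard = (bits > 0).astype(np.int64)
    return (hard << np.arange(bits.shape[-1])).sum(axis=-1)
```

Because decoding only looks at the sign, the diffusion model's continuous outputs need not land exactly on ±1 to recover the original tokens.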
-
Publication No.: US20240386267A1
Publication Date: 2024-11-21
Application No.: US18668073
Filing Date: 2024-05-17
Applicant: Google LLC
Inventor: Ting Chen
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing data using machine learning models. One of the methods includes obtaining a network input for the time step, wherein the network input comprises a plurality of data tokens; generating, from at least the network input for the time step, a plurality of groups of data tokens; initializing a plurality of sets of latent tokens for the time step, each set corresponding to a respective one of the plurality of groups; processing the data tokens in each group and the plurality of sets of latent tokens through each neural network block in a sequence of neural network blocks; and after processing each group of data tokens and the latent tokens through the sequence of neural network blocks, generating a network output for the time step.
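The flow the abstract describes — one latent-token set initialized per group of data tokens, updated by each block in a sequence — might look like this in miniature. This NumPy sketch uses identity attention projections and a residual update; all names and sizes are assumptions for illustration, not the claimed architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def read_block(latents, data_tokens):
    """One block: latent tokens attend over their group's data tokens
    (single-head cross-attention, identity projections) plus a residual."""
    attn = softmax(latents @ data_tokens.T / np.sqrt(latents.shape[-1]))
    return latents + attn @ data_tokens

def process_groups(groups, num_latents=4, num_blocks=2, seed=0):
    """Initialize one latent set per group, then run it through the block sequence."""
    rng = np.random.default_rng(seed)
    outputs = []
    for g in groups:
        latents = rng.normal(size=(num_latents, g.shape[-1]))
        for _ in range(num_blocks):
            latents = read_block(latents, g)
        outputs.append(latents)
    return outputs
```

The network output for the time step would then be generated from the per-group latent sets after the final block.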
-
Publication No.: US20240265586A1
Publication Date: 2024-08-08
Application No.: US18564841
Filing Date: 2022-05-27
Applicant: Google LLC
Inventor: Long Zhao , Han Zhang , Zizhao Zhang , Ting Chen
IPC: G06T11/00 , G06T3/4046
CPC classification number: G06T11/00 , G06T3/4046
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating high-resolution images using self-attention based neural networks. One of the systems includes a neural network configured to generate images, the neural network comprising a sequence of one or more first network blocks followed by a sequence of one or more second network blocks, wherein: each first network block is configured to perform operations comprising: applying a self-attention mechanism over at least a subset of first elements of a first block input to generate an updated first block input; and upsampling the updated first block input to generate a first block output; and each second network block is configured to perform operations comprising: processing a second block input using one or more neural network layers to generate an updated second block input; and upsampling the updated second block input to generate a second block output.
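The first-block operations the abstract spells out — apply self-attention over the block input's elements, then upsample the updated input — can be sketched in NumPy. Identity Q/K/V projections and nearest-neighbour 2x upsampling are simplifying assumptions here; this is an illustration, not the claimed implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Single-head self-attention over all elements (identity Q/K/V projections)."""
    attn = softmax(x @ x.T / np.sqrt(x.shape[-1]))
    return attn @ x

def upsample2x(x, h, w):
    """Nearest-neighbour 2x upsampling of (h*w, c) elements to (2h*2w, c)."""
    grid = x.reshape(h, w, -1)
    grid = grid.repeat(2, axis=0).repeat(2, axis=1)
    return grid.reshape(4 * h * w, -1)

def first_block(x, h, w):
    """Attend over the block input, then upsample the updated input."""
    return upsample2x(self_attention(x), h, w)
```

The second blocks follow the same shape-doubling pattern but replace self-attention with ordinary neural network layers, which keeps the expensive quadratic attention out of the highest resolutions.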
-
Publication No.: US11354778B2
Publication Date: 2022-06-07
Application No.: US16847163
Filing Date: 2020-04-13
Applicant: Google LLC
Inventor: Ting Chen , Simon Kornblith , Mohammad Norouzi , Geoffrey Everest Hinton
Abstract: Provided are systems and methods for contrastive learning of visual representations. In particular, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. In contrast to certain existing techniques, the contrastive self-supervised learning algorithms described herein do not require specialized architectures or a memory bank. Some example implementations of the proposed approaches can be referred to as a simple framework for contrastive learning of representations or “SimCLR.” Further example aspects are described below and provide the following benefits and insights.
-
Publication No.: US20230260652A1
Publication Date: 2023-08-17
Application No.: US18012187
Filing Date: 2021-12-10
Applicant: Google LLC
Inventor: Shekoofeh Azizi , Wen Yau Aaron Loh , Zachary William Beaver , Ting Chen , Jonathan Paul Deaton , Jan Freyberg , Alan Prasana Karthikesalingam , Simon Kornblith , Basil Mustafa , Mohammad Norouzi , Vivek Natarajan , Fiona Keleher Ryan
CPC classification number: G16H50/20 , G06T7/0012 , G06V10/761 , G16H30/40 , G16H50/70 , G06T2207/20081 , G06T2207/20132
Abstract: Systems and methods can perform self-supervised machine learning for improved medical image analysis. As one example, self-supervised learning on ImageNet, followed by additional self-supervised learning on unlabeled medical images from the target domain of interest, followed by fine-tuning on labeled medical images from the target domain, significantly improves the accuracy of medical image classifiers such as, for example, diagnostic models. Another example aspect of the present disclosure is directed to a novel Multi-Instance Contrastive Learning (MICLe) method that uses multiple different medical images that share one or more attributes (e.g., multiple images that depict the same underlying pathology and/or the same patient) to construct more informative positive pairs for self-supervised learning.
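The MICLe pairing strategy — positives built from distinct images sharing an attribute such as the patient, rather than from two augmentations of one image — can be sketched as follows. The function name and input format are assumptions for illustration:

```python
from itertools import combinations

def micle_positive_pairs(image_ids, patient_ids):
    """Group images by a shared attribute (here, the patient) and emit
    every within-group pair as a contrastive positive pair."""
    groups = {}
    for img, pid in zip(image_ids, patient_ids):
        groups.setdefault(pid, []).append(img)
    pairs = []
    for imgs in groups.values():
        pairs.extend(combinations(imgs, 2))
    return pairs
```

Pairs built this way span real variation (pose, lighting, acquisition) between images of the same pathology or patient, which is what makes them more informative than augmentation-only positives.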
-
Publication No.: US11847571B2
Publication Date: 2023-12-19
Application No.: US17863070
Filing Date: 2022-07-12
Applicant: Google LLC
Inventor: Ting Chen , Geoffrey Everest Hinton , Simon Kornblith , Mohammad Norouzi
IPC: G06V10/00 , G06N3/084 , G06N3/08 , G06F18/21 , G06F18/241 , G06F18/214 , G06V10/764 , G06V10/774 , G06V10/778
CPC classification number: G06N3/084 , G06F18/2155 , G06F18/2178 , G06F18/241 , G06N3/08 , G06V10/764 , G06V10/7753 , G06V10/7788 , G06T2207/20081
Abstract: Systems, methods, and computer program products for performing semi-supervised contrastive learning of visual representations are provided. For example, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. Further, the present disclosure also provides improvements for semi-supervised contrastive learning. For example, a computer-implemented method may include performing semi-supervised contrastive learning based on a set of one or more unlabeled training data, generating an image classification model based on a portion of a plurality of layers in a projection head neural network used in performing the contrastive learning, performing fine-tuning of the image classification model based on a set of one or more labeled training data, and, after performing the fine-tuning, distilling the image classification model to a student model comprising a smaller number of parameters than the image classification model.