-
Publication No.: US20230260652A1
Publication Date: 2023-08-17
Application No.: US18012187
Filing Date: 2021-12-10
Applicant: Google LLC
Inventor: Shekoofeh Azizi , Wen Yau Aaron Loh , Zachary William Beaver , Ting Chen , Jonathan Paul Deaton , Jan Freyberg , Alan Prasana Karthikesalingam , Simon Kornblith , Basil Mustafa , Mohammad Norouzi , Vivek Natarajan , Fiona Keleher Ryan
CPC classification number: G16H50/20 , G06T7/0012 , G06V10/761 , G16H30/40 , G16H50/70 , G06T2207/20081 , G06T2207/20132
Abstract: Systems and methods can perform self-supervised machine learning for improved medical image analysis. As one example, self-supervised learning on ImageNet, followed by additional self-supervised learning on unlabeled medical images from the target domain of interest, followed by fine-tuning on labeled medical images from the target domain, significantly improves the accuracy of medical image classifiers such as, for example, diagnostic models. Another example aspect of the present disclosure is directed to a novel Multi-Instance Contrastive Learning (MICLe) method that uses multiple different medical images that share one or more attributes (e.g., multiple images that depict the same underlying pathology and/or the same patient) to construct more informative positive pairs for self-supervised learning.
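The abstract's core MICLe idea, pairing two distinct images of the same case rather than two augmentations of one image, can be illustrated with a minimal sketch. This is not from the patent itself; the function name, the case dictionary, and the fallback behavior for single-view cases are illustrative assumptions.

```python
import random

def micle_positive_pairs(images_by_case, rng=random.Random(0)):
    """For each case (e.g., one patient or pathology), draw two distinct
    images as a contrastive positive pair; a case with only one view falls
    back to standard self-pairing (two augmentations of the same image)."""
    pairs = []
    for case_id, imgs in images_by_case.items():
        if len(imgs) >= 2:
            a, b = rng.sample(imgs, 2)   # two different views of the same case
        else:
            a = b = imgs[0]              # single view: pair the image with itself
        pairs.append((case_id, a, b))
    return pairs

# Toy example: strings stand in for image tensors.
cases = {"patient_1": ["p1_view_a", "p1_view_b", "p1_view_c"],
         "patient_2": ["p2_view_a"]}
for cid, a, b in micle_positive_pairs(cases):
    print(cid, a, b)
```

In a real pipeline each selected image would additionally be augmented before being encoded, so the pair differs both in view and in augmentation.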
-
Publication No.: US11354778B2
Publication Date: 2022-06-07
Application No.: US16847163
Filing Date: 2020-04-13
Applicant: Google LLC
Inventor: Ting Chen , Simon Kornblith , Mohammad Norouzi , Geoffrey Everest Hinton
Abstract: Provided are systems and methods for contrastive learning of visual representations. In particular, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. In contrast to certain existing techniques, the contrastive self-supervised learning algorithms described herein do not require specialized architectures or a memory bank. Some example implementations of the proposed approaches can be referred to as a simple framework for contrastive learning of representations, or “SimCLR.” Further example aspects are described below and provide the following benefits and insights.
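The contrastive objective behind SimCLR is commonly formulated as an NT-Xent (normalized temperature-scaled cross-entropy) loss over a batch of paired augmented views. The following is a minimal numpy sketch of that loss, not code from the patent; the interleaved view layout (rows 2k and 2k+1 are the two views of example k) is an assumption for illustration.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent loss over 2N projection vectors, where z[2k] and z[2k+1]
    are the two augmented views of example k."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize projections
    sim = z @ z.T / temperature                        # pairwise cosine similarities
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    pos = np.arange(n) ^ 1                             # index of each row's sibling view
    log_prob = sim[np.arange(n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

When the two views of each example map to identical projections, the loss is lower than for unrelated projections, which is the behavior the objective is designed to induce.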
-
Publication No.: US20220374658A1
Publication Date: 2022-11-24
Application No.: US17863070
Filing Date: 2022-07-12
Applicant: Google LLC
Inventor: Ting Chen , Geoffrey Everest Hinton , Simon Kornblith , Mohammad Norouzi
Abstract: Systems, methods, and computer program products for performing semi-supervised contrastive learning of visual representations are provided. For example, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. Further, the present disclosure also provides improvements for semi-supervised contrastive learning. For example, a computer-implemented method may include performing semi-supervised contrastive learning based on a set of one or more unlabeled training data, generating an image classification model based on a portion of a plurality of layers in a projection head neural network used in performing the contrastive learning, performing fine-tuning of the image classification model based on a set of one or more labeled training data, and, after performing the fine-tuning, distilling the image classification model to a student model comprising a relatively smaller number of parameters than the image classification model.
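The final step the abstract describes, distilling the fine-tuned classifier into a smaller student, is typically driven by a loss that matches the student's predictions to the teacher's temperature-softened predictions. The sketch below is a generic numpy illustration of such a distillation loss, not the patent's implementation; the temperature value and function names are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))   # stabilized softmax
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student's softened predictions against the
    fine-tuned teacher's softened predictions."""
    t = softmax(teacher_logits / temperature)
    log_s = np.log(softmax(student_logits / temperature))
    return -(t * log_s).sum(axis=-1).mean()
```

The loss is minimized when the student reproduces the teacher's predictive distribution, which is what lets a smaller network inherit the larger model's behavior.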
-
Publication No.: US11386302B2
Publication Date: 2022-07-12
Application No.: US17018372
Filing Date: 2020-09-11
Applicant: Google LLC
Inventor: Ting Chen , Simon Kornblith , Mohammad Norouzi , Geoffrey Everest Hinton , Kevin Jordan Swersky
IPC: G06V10/774 , G06K9/62 , G06N3/08
Abstract: Systems, methods, and computer program products for performing semi-supervised contrastive learning of visual representations are provided. For example, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. Further, the present disclosure also provides improvements for semi-supervised contrastive learning. For example, a computer-implemented method may include performing semi-supervised contrastive learning based on a set of one or more unlabeled training data, generating an image classification model based on a portion of a plurality of layers in a projection head neural network used in performing the contrastive learning, performing fine-tuning of the image classification model based on a set of one or more labeled training data, and, after performing the fine-tuning, distilling the image classification model to a student model comprising a relatively smaller number of parameters than the image classification model.
-
Publication No.: US12265911B2
Publication Date: 2025-04-01
Application No.: US17121161
Filing Date: 2020-12-14
Applicant: Google LLC
Inventor: Gamaleldin Elsayed , Prajit Ramachandran , Jon Shlens , Simon Kornblith
Abstract: A computing system can include one or more non-transitory computer-readable media that collectively store a neural network including one or more layers with relaxed spatial invariance. Each of the one or more layers can be configured to receive a respective layer input. Each of the one or more layers can be configured to convolve a plurality of different kernels against the respective layer input to generate a plurality of intermediate outputs, each of the plurality of intermediate outputs having a plurality of portions. Each of the one or more layers can be configured to apply, for each of the plurality of intermediate outputs, a respective plurality of weights respectively associated with the plurality of portions to generate a respective weighted output. Each of the one or more layers can be configured to generate a respective layer output based on the weighted outputs.
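The layer the abstract describes, convolving several kernels and then mixing their outputs with position-dependent weights, can be sketched in one dimension. This is a simplified illustration of the claimed structure, not the patent's implementation; the 1-D setting, shapes, and names are assumptions.

```python
import numpy as np

def relaxed_conv1d(x, kernels, spatial_weights):
    """1-D sketch of a layer with relaxed spatial invariance: each of K
    kernels is convolved over the input to produce K intermediate outputs,
    which are then combined with per-position weights, so the effective
    filter can vary across spatial locations.

    x: (L,) input signal
    kernels: (K, k) bank of K kernels
    spatial_weights: (K, L_out) mixing weight for each kernel at each position
    """
    K, k = kernels.shape
    L_out = x.shape[0] - k + 1
    # Cross-correlate each kernel with the input (reverse for np.convolve).
    inter = np.stack([np.convolve(x, kern[::-1], mode="valid") for kern in kernels])
    # Weight each intermediate output's portions and sum over kernels.
    return (spatial_weights[:, :L_out] * inter).sum(axis=0)   # (L_out,)
```

With a single kernel and uniform weights this reduces to an ordinary convolution; non-uniform weights interpolate toward a locally connected layer.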
-
Publication No.: US11847571B2
Publication Date: 2023-12-19
Application No.: US17863070
Filing Date: 2022-07-12
Applicant: Google LLC
Inventor: Ting Chen , Geoffrey Everest Hinton , Simon Kornblith , Mohammad Norouzi
IPC: G06V10/00 , G06N3/084 , G06N3/08 , G06F18/21 , G06F18/241 , G06F18/214 , G06V10/764 , G06V10/774 , G06V10/778
CPC classification number: G06N3/084 , G06F18/2155 , G06F18/2178 , G06F18/241 , G06N3/08 , G06V10/764 , G06V10/7753 , G06V10/7788 , G06T2207/20081
Abstract: Systems, methods, and computer program products for performing semi-supervised contrastive learning of visual representations are provided. For example, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. Further, the present disclosure also provides improvements for semi-supervised contrastive learning. For example, a computer-implemented method may include performing semi-supervised contrastive learning based on a set of one or more unlabeled training data, generating an image classification model based on a portion of a plurality of layers in a projection head neural network used in performing the contrastive learning, performing fine-tuning of the image classification model based on a set of one or more labeled training data, and, after performing the fine-tuning, distilling the image classification model to a student model comprising a relatively smaller number of parameters than the image classification model.
-
Publication No.: US20240169715A1
Publication Date: 2024-05-23
Application No.: US18518075
Filing Date: 2023-11-22
Applicant: Google LLC
Inventor: Lucas Klaus Beyer , Pavel Izmailov , Simon Kornblith , Alexander Kolesnikov , Mathilde Caron , Xiaohua Zhai , Matthias Johannes Lorenz Minderer , Ibrahim Alabdulmohsin , Michael Tobias Tschannen , Filip Pavetic
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network that is configured to process an input image to generate a network output for the input image. In one aspect, a method comprises, at each of a plurality of training steps: obtaining a plurality of training images for the training step; obtaining, for each of the plurality of training images, a respective target output; and selecting, from a plurality of image patch generation schemes, an image patch generation scheme for the training step, wherein, given an input image, each of the plurality of image patch generation schemes generates a different number of patches of the input image, and wherein each patch comprises a respective subset of the pixels of the input image.
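The abstract's image patch generation schemes differ in how many patches they cut from the same input: a smaller patch size yields more patches, each covering a different subset of pixels. The sketch below illustrates one such scheme with non-overlapping square patches; it is an illustrative assumption, not the patent's method, and real schemes may also resize or overlap patches.

```python
import numpy as np

def patchify(image, patch_size):
    """Split an (H, W) image into non-overlapping patch_size x patch_size
    patches, each flattened to a vector. Different patch sizes produce
    different numbers of patches from the same image."""
    H, W = image.shape
    p = patch_size
    assert H % p == 0 and W % p == 0, "image must tile evenly into patches"
    return (image.reshape(H // p, p, W // p, p)
                 .transpose(0, 2, 1, 3)       # gather each patch's rows together
                 .reshape(-1, p * p))         # (num_patches, p*p)
```

Selecting a different `patch_size` per training step changes the token count the network sees, which is the per-step choice the abstract describes.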
-
Publication No.: US11475277B2
Publication Date: 2022-10-18
Application No.: US15931106
Filing Date: 2020-05-13
Applicant: Google LLC
Inventor: Gamaleldin Elsayed , Simon Kornblith , Quoc V. Le
Abstract: Generally, the present disclosure is directed to novel machine-learned classification models that operate with hard attention to make discrete attention actions. The present disclosure also provides a self-supervised pre-training procedure that initializes the model to a state with more frequent rewards. Given only the ground truth classification labels for a set of training inputs (e.g., images), the proposed models are able to learn a policy over discrete attention locations that identifies certain portions of the input (e.g., patches of the images) that are relevant to the classification. In such fashion, the models are able to provide high accuracy classifications while also providing an explicit and interpretable basis for the decision.
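The hard-attention idea above, committing to discrete patch locations so the classifier's evidence is explicit, can be sketched as follows. This is a simplified illustration, not the patent's learned policy: here the per-location scores are given as an input, whereas the disclosure learns them, and all names are assumptions.

```python
import numpy as np

def hard_attention_glimpses(image, patch_size, scores, k=2):
    """Pick the top-k patch locations by policy score (a hard, discrete
    attention action) and return only those patches, so exactly which
    regions informed the decision is explicit and interpretable."""
    H, W = image.shape
    p = patch_size
    # Candidate glimpse locations on a non-overlapping grid.
    grid = [(i, j) for i in range(0, H - p + 1, p)
                   for j in range(0, W - p + 1, p)]
    order = np.argsort(scores)[::-1][:k]      # discrete attention actions
    locs = [grid[t] for t in order]
    glimpses = [image[i:i + p, j:j + p] for (i, j) in locs]
    return locs, glimpses
```

A downstream classifier would consume only the returned glimpses; because the selection is discrete rather than a soft weighting, training such a policy typically requires reinforcement-learning-style estimators, which motivates the pre-training procedure the abstract mentions.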
-
Publication No.: US20210327029A1
Publication Date: 2021-10-21
Application No.: US16847163
Filing Date: 2020-04-13
Applicant: Google LLC
Inventor: Ting Chen , Simon Kornblith , Mohammad Norouzi , Geoffrey Everest Hinton
Abstract: Provided are systems and methods for contrastive learning of visual representations. In particular, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. In contrast to certain existing techniques, the contrastive self-supervised learning algorithms described herein do not require specialized architectures or a memory bank. Some example implementations of the proposed approaches can be referred to as a simple framework for contrastive learning of representations, or “SimCLR.” Further example aspects are described below and provide the following benefits and insights.
-
Publication No.: US20210319266A1
Publication Date: 2021-10-14
Application No.: US17018372
Filing Date: 2020-09-11
Applicant: Google LLC
Inventor: Ting Chen , Simon Kornblith , Mohammad Norouzi , Geoffrey Everest Hinton
Abstract: Systems, methods, and computer program products for performing semi-supervised contrastive learning of visual representations are provided. For example, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. Further, the present disclosure also provides improvements for semi-supervised contrastive learning. For example, a computer-implemented method may include performing semi-supervised contrastive learning based on a set of one or more unlabeled training data, generating an image classification model based on a portion of a plurality of layers in a projection head neural network used in performing the contrastive learning, performing fine-tuning of the image classification model based on a set of one or more labeled training data, and, after performing the fine-tuning, distilling the image classification model to a student model comprising a relatively smaller number of parameters than the image classification model.