-
Publication number: US20220148299A1
Publication date: 2022-05-12
Application number: US17438687
Filing date: 2019-07-19
Applicant: Google LLC
Inventor: Mikael Pierre Bonnevie , Aaron Maschinot , Aaron Sarna , Shuchao Bi , Jingbin Wang , Michael Spencer Krainin , Wenchao Tong , Dilip Krishnan , Haifeng Gong , Ce Liu , Hossein Talebi , Raanan Sayag , Piotr Teterwak
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating realistic extensions of images. In one aspect, a method comprises providing an input that comprises a provided image to a generative neural network having a plurality of generative neural network parameters. The generative neural network processes the input in accordance with trained values of the plurality of generative neural network parameters to generate an extended image. The extended image has (i) more rows, more columns, or both than the provided image, and (ii) is predicted to be a realistic extension of the provided image. The generative neural network is trained using an adversarial loss objective function.
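As a rough illustration of the kind of setup the abstract describes (conditioning a generative network on a provided image and training with an adversarial loss), the sketch below shows one common way to build such a conditional input by padding the image to the extended size with a known-pixel mask channel, alongside a standard non-saturating adversarial loss. This is a minimal sketch of the general technique, not the patented method; all function names are hypothetical.

```python
import numpy as np

def make_generator_input(image, extra_rows, extra_cols):
    """Pad an (H, W, C) image to the extended size and append a binary
    mask channel that is 1 where original pixels are known, 0 elsewhere.
    The generator is expected to fill in the masked-out region."""
    h, w, c = image.shape
    padded = np.zeros((h + extra_rows, w + extra_cols, c + 1), dtype=image.dtype)
    padded[:h, :w, :c] = image   # copy the provided pixels
    padded[:h, :w, c] = 1.0      # mark them as known in the mask channel
    return padded

def adversarial_losses(d_real, d_fake):
    """Non-saturating GAN losses computed from discriminator logits on
    real extended images (d_real) and generated ones (d_fake)."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    d_loss = -np.mean(np.log(sigmoid(d_real) + 1e-8)
                      + np.log(1.0 - sigmoid(d_fake) + 1e-8))
    g_loss = -np.mean(np.log(sigmoid(d_fake) + 1e-8))
    return d_loss, g_loss
```

In a full training loop, the generator would map the padded input to an extended image, and the two losses would drive the discriminator and generator updates respectively.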
-
Publication number: US20230153629A1
Publication date: 2023-05-18
Application number: US17920623
Filing date: 2021-04-12
Applicant: Google LLC
Inventor: Dilip Krishnan , Prannay Khosla , Piotr Teterwak , Aaron Yehuda Sarna , Aaron Joseph Maschinot , Ce Liu , Philip John Isola , Yonglong Tian , Chen Wang
IPC: G06N3/09 , G06V10/74 , G06V10/776 , G06V10/82
CPC classification number: G06N3/09 , G06V10/761 , G06V10/776 , G06V10/82
Abstract: The present disclosure provides an improved training methodology that enables supervised contrastive learning to be simultaneously performed across multiple positive and negative training examples. In particular, example aspects of the present disclosure are directed to an improved, supervised version of the batch contrastive loss, which has been shown to be very effective at learning powerful representations in the self-supervised setting. Thus, the proposed techniques adapt contrastive learning to the fully supervised setting and also enable learning to occur simultaneously across multiple positive examples.
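The supervised batch contrastive loss the abstract refers to treats every sample sharing the anchor's label as a positive and all other samples in the batch as negatives. The sketch below is a minimal numpy illustration of that general idea, not the claimed method; the function name and default temperature are assumptions.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of embeddings.

    For each anchor, all samples with the same label are positives and
    all remaining samples in the batch act as negatives, so the loss is
    averaged over multiple positives per anchor."""
    # L2-normalize so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature
    n = labels.shape[0]
    self_mask = np.eye(n, dtype=bool)
    # Softmax denominator over all other samples (anchor excluded)
    sim_others = np.where(self_mask, -np.inf, sim)
    row_max = sim_others.max(axis=1, keepdims=True)
    log_denom = row_max + np.log(
        np.exp(sim_others - row_max).sum(axis=1, keepdims=True))
    log_prob = sim - log_denom
    # Positives: same label, excluding the anchor itself
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    loss_per_anchor = -(log_prob * pos_mask).sum(axis=1) / pos_mask.sum(axis=1)
    return loss_per_anchor.mean()
```

Embeddings that cluster by label yield a low loss, while embeddings that mix labels are penalized, which is what drives the learned representation.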
-
Publication number: US20210326660A1
Publication date: 2021-10-21
Application number: US17235992
Filing date: 2021-04-21
Applicant: Google LLC
Inventor: Dilip Krishnan , Prannay Khosla , Piotr Teterwak , Aaron Yehuda Sarna , Aaron Joseph Maschinot , Ce Liu , Phillip John Isola , Yonglong Tian , Chen Wang
Abstract: The present disclosure provides an improved training methodology that enables supervised contrastive learning to be simultaneously performed across multiple positive and negative training examples. In particular, example aspects of the present disclosure are directed to an improved, supervised version of the batch contrastive loss, which has been shown to be very effective at learning powerful representations in the self-supervised setting. Thus, the proposed techniques adapt contrastive learning to the fully supervised setting and also enable learning to occur simultaneously across multiple positive examples.
-
Publication number: US12236676B2
Publication date: 2025-02-25
Application number: US17438687
Filing date: 2019-07-19
Applicant: Google LLC
Inventor: Mikael Pierre Bonnevie , Aaron Maschinot , Aaron Sarna , Shuchao Bi , Jingbin Wang , Michael Spencer Krainin , Wenchao Tong , Dilip Krishnan , Haifeng Gong , Ce Liu , Hossein Talebi , Raanan Sayag , Piotr Teterwak
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating realistic extensions of images. In one aspect, a method comprises providing an input that comprises a provided image to a generative neural network having a plurality of generative neural network parameters. The generative neural network processes the input in accordance with trained values of the plurality of generative neural network parameters to generate an extended image. The extended image has (i) more rows, more columns, or both than the provided image, and (ii) is predicted to be a realistic extension of the provided image. The generative neural network is trained using an adversarial loss objective function.
-
Publication number: US11347975B2
Publication date: 2022-05-31
Application number: US17235992
Filing date: 2021-04-21
Applicant: Google LLC
Inventor: Dilip Krishnan , Prannay Khosla , Piotr Teterwak , Aaron Yehuda Sarna , Aaron Joseph Maschinot , Ce Liu , Phillip John Isola , Yonglong Tian , Chen Wang
Abstract: The present disclosure provides an improved training methodology that enables supervised contrastive learning to be simultaneously performed across multiple positive and negative training examples. In particular, example aspects of the present disclosure are directed to an improved, supervised version of the batch contrastive loss, which has been shown to be very effective at learning powerful representations in the self-supervised setting. Thus, the proposed techniques adapt contrastive learning to the fully supervised setting and also enable learning to occur simultaneously across multiple positive examples.