-
21.
Publication No.: US20210264207A1
Publication Date: 2021-08-26
Application No.: US16802243
Filing Date: 2020-02-26
Applicant: ADOBE INC.
Inventor: Cameron Smith, Yannick Hold-Geoffroy, Mariia Drozdova
Abstract: Images can be edited to include features similar to a different target image. An unconditional generative adversarial network (GAN) is employed to edit features of an initial image based on a constraint determined from a target image. The constraint used by the GAN is determined from keypoints or segmentation masks of the target image, and edits are made to features of the initial image based on keypoints or segmentation masks of the initial image corresponding to those of the constraint from the target image. The GAN modifies the initial image based on a loss function having a variable for the constraint. The result of this optimization process is a modified initial image having features similar to the target image subject to the constraint determined from the identified keypoints or segmentation masks.
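A minimal sketch of the kind of constrained latent optimization the abstract describes, assuming stand-in modules (`generator`, `keypoints_of`) in place of the pretrained unconditional GAN and keypoint detector; the loss weighting and the fidelity term are assumptions for illustration, not the patent's exact formulation:
```python
# Hedged sketch: latent-space optimization against a keypoint constraint.
# "generator" and "keypoints_of" are stand-ins for a pretrained unconditional
# GAN and a keypoint detector.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

latent_dim, image_dim, num_keypoints = 64, 256, 10
generator = nn.Sequential(nn.Linear(latent_dim, image_dim), nn.Tanh())   # stand-in GAN
keypoints_of = nn.Linear(image_dim, num_keypoints * 2)                   # stand-in detector

z = torch.randn(1, latent_dim, requires_grad=True)        # latent code being edited
with torch.no_grad():
    initial_image = generator(torch.randn(1, latent_dim))
    target_keypoints = keypoints_of(initial_image) + 0.1   # pretend "target" constraint

optimizer = torch.optim.Adam([z], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    image = generator(z)
    # constraint term: keypoints of the edited image should match the target's
    constraint_loss = F.mse_loss(keypoints_of(image), target_keypoints)
    # fidelity term: stay close to the initial image elsewhere (assumed weighting)
    fidelity_loss = F.mse_loss(image, initial_image)
    loss = constraint_loss + 0.1 * fidelity_loss
    loss.backward()
    optimizer.step()

edited_image = generator(z).detach()   # modified image subject to the constraint
```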
-
22.
Publication No.: US20200074682A1
Publication Date: 2020-03-05
Application No.: US16675641
Filing Date: 2019-11-06
Applicant: ADOBE INC.
Inventor: Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Matthew David Fisher, Jonathan Eisenmann, Emiliano Gambaretto
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters. Once trained, the convolutional neural network can receive a new digital image, and based on detected image characteristics thereof, estimate a corresponding set of camera calibration parameters with a calculated level of confidence.
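To picture the data-generation step, here is a hedged sketch (not the patent's implementation) that crops rectilinear views with randomly sampled pitch, roll, and field of view from a single equirectangular panorama, so every crop carries known calibration labels a CNN could be trained to regress; the parameter ranges, sizes, and projection conventions are assumptions:
```python
# Hedged sketch: generating labeled training crops from one equirectangular
# panorama by sampling plausible pitch/roll/FOV values.
import numpy as np

def crop_from_panorama(pano, pitch, roll, h_fov, size=128):
    """Rectilinear crop of an equirectangular panorama (nearest-neighbour)."""
    H, W = pano.shape[:2]
    focal = (size / 2) / np.tan(h_fov / 2)
    j, i = np.meshgrid(np.arange(size) - size / 2,
                       np.arange(size) - size / 2)
    rays = np.stack([j, i, np.full_like(j, focal)], axis=-1)  # x right, y down, z forward
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    R_roll = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    R_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    rays = rays @ (R_pitch @ R_roll).T

    lon = np.arctan2(rays[..., 0], rays[..., 2])              # [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1, 1))             # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return pano[v, u]

rng = np.random.default_rng(0)
pano = rng.random((256, 512, 3))             # placeholder panorama
dataset = []
for _ in range(32):                          # plausible parameter ranges (assumed)
    pitch = rng.uniform(-0.3, 0.3)
    roll = rng.uniform(-0.2, 0.2)
    h_fov = rng.uniform(np.deg2rad(40), np.deg2rad(90))
    crop = crop_from_panorama(pano, pitch, roll, h_fov)
    dataset.append((crop, np.array([pitch, roll, h_fov])))    # image + known labels
```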
-
23.
Publication No.: US10515460B2
Publication Date: 2019-12-24
Application No.: US15826331
Filing Date: 2017-11-29
Applicant: ADOBE INC.
Inventor: Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Matthew David Fisher, Jonathan Eisenmann, Emiliano Gambaretto
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters. Once trained, the convolutional neural network can receive a new digital image, and based on detected image characteristics thereof, estimate a corresponding set of camera calibration parameters with a calculated level of confidence.
-
24.
Publication No.: US12254589B2
Publication Date: 2025-03-18
Application No.: US18055716
Filing Date: 2022-11-15
Applicant: Adobe Inc.
Inventor: Mohammad Reza Karimi Dastjerdi, Yannick Hold-Geoffroy, Vladimir Kim, Jonathan Eisenmann, Jean-François Lalonde
IPC: G06T3/04 , G06T3/18 , G06T3/4023 , G06T3/4046 , G06T7/00 , G06V10/774 , G06V10/776
Abstract: Embodiments are disclosed for generating 360-degree panoramas from input narrow field of view images. A method of generating 360-degree panoramas may include obtaining an input image and guide, generating a panoramic projection of the input image, and generating, by a panorama generator, a 360-degree panorama based on the panoramic projection and the guide, wherein the panorama generator is a guided co-modulation generator network trained to generate a 360-degree panorama from the input image based on the guide.
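A hedged sketch of how the "panoramic projection" step might look: the narrow field-of-view input is warped onto an equirectangular canvas, and the mask of observed pixels marks the region the guided generator would then complete; the projection conventions and sizes are assumptions, not taken from the patent:
```python
# Hedged sketch: projecting a narrow field-of-view image onto an
# equirectangular canvas; the unfilled region would be completed by the
# guided panorama generator.
import numpy as np

def nfov_to_equirect(image, h_fov, pano_h=256, pano_w=512):
    """Return a partial panorama and a mask of valid (observed) pixels."""
    h, w = image.shape[:2]
    focal = (w / 2) / np.tan(h_fov / 2)

    lon = (np.linspace(0, pano_w - 1, pano_w) / (pano_w - 1) * 2 - 1) * np.pi
    lat = (np.linspace(0, pano_h - 1, pano_h) / (pano_h - 1) * 2 - 1) * (np.pi / 2)
    lon, lat = np.meshgrid(lon, lat)

    # unit ray per panorama pixel (x right, y down, z forward)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    valid = z > 1e-6                          # only rays in front of the camera
    u = np.where(valid, focal * x / np.where(valid, z, 1) + w / 2, -1)
    v = np.where(valid, focal * y / np.where(valid, z, 1) + h / 2, -1)
    inside = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    pano = np.zeros((pano_h, pano_w, 3), dtype=image.dtype)
    pano[inside] = image[v[inside].astype(int), u[inside].astype(int)]
    return pano, inside

input_image = np.random.default_rng(0).random((128, 128, 3))   # placeholder NFoV input
partial_pano, observed = nfov_to_equirect(input_image, np.deg2rad(60))
```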
-
25.
Publication No.: US20240135612A1
Publication Date: 2024-04-25
Application No.: US18304113
Filing Date: 2023-04-20
Applicant: Adobe Inc.
Inventor: Yannick Hold-Geoffroy, Vojtech Krs, Radomir Mech, Nathan Carr, Matheus Gadelha
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images. The disclosed systems further use three-dimensional representations of two-dimensional images to customize focal points for the two-dimensional images.
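Of the several capabilities listed, the scale-field idea lends itself to a compact illustration; the sketch below is an assumption-laden toy in which a random depth map stands in for the image's 3D representation, and pixel distances are converted to rough metric distances under a pinhole model:
```python
# Hedged sketch: a per-pixel "scale field" (metres per pixel) derived from a
# depth map and the focal length under a pinhole camera model.
import numpy as np

def scale_field(depth, focal_px):
    """Metres-per-pixel at each pixel: size = pixels * depth / focal."""
    return depth / focal_px

rng = np.random.default_rng(0)
depth = 2.0 + rng.random((256, 256))        # placeholder depth map in metres
field = scale_field(depth, focal_px=800.0)

# estimate the metric height of an object spanning rows 40..200 in column 128
top, bottom, col = 40, 200, 128
pixel_height = bottom - top
metric_height = pixel_height * field[(top + bottom) // 2, col]
print(f"approx. object height: {metric_height:.2f} m")
```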
-
26.
Publication No.: US20240127509A1
Publication Date: 2024-04-18
Application No.: US18304134
Filing Date: 2023-04-20
Applicant: Adobe Inc.
Inventor: Yannick Hold-Geoffroy, Jianming Zhang, Byeonguk Lee
CPC classification number: G06T11/60 , G06T3/4046
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images. The disclosed systems further use three-dimensional representations of two-dimensional images to customize focal points for the two-dimensional images.
-
27.
Publication No.: US11854115B2
Publication Date: 2023-12-26
Application No.: US17519117
Filing Date: 2021-11-04
Applicant: Adobe Inc.
Inventor: Daichi Ito, Yijun Li, Yannick Hold-Geoffroy, Koki Madono, Jose Ignacio Echevarria Vallespi, Cameron Younger Smith
CPC classification number: G06T11/00 , G06T7/10 , G06V40/171 , G06T2207/20081 , G06T2207/20092 , G06T2207/30201
Abstract: A vectorized caricature avatar generator receives a user image from which face parameters are generated. Segments of the user image including certain facial features (e.g., hair, facial hair, eyeglasses) are also identified. Segment parameter values are also determined, the segment parameter values being those parameter values from a set of caricature avatars that correspond to the segments of the user image. The face parameter values and the segment parameter values are used to generate a caricature avatar of the user in the user image.
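A hedged sketch of how regressed face parameters and library-matched segment parameters might be combined into one avatar parameter set; the regressor, descriptors, and library below are random stand-ins rather than the patent's learned models:
```python
# Hedged sketch: face parameters plus nearest-neighbour segment parameters
# drawn from a small caricature-avatar library.
import numpy as np

rng = np.random.default_rng(0)

def face_parameters(image):                 # stand-in for the face regressor
    return rng.random(16)

def segment_feature(image, segment):        # stand-in per-segment descriptor
    return rng.random(8)

# tiny library: per-segment descriptor -> caricature parameter values (assumed)
library = {
    "hair":       [(rng.random(8), rng.random(4)) for _ in range(5)],
    "eyeglasses": [(rng.random(8), rng.random(4)) for _ in range(5)],
}

def nearest_segment_params(feature, entries):
    dists = [np.linalg.norm(feature - f) for f, _ in entries]
    return entries[int(np.argmin(dists))][1]

user_image = rng.random((128, 128, 3))
avatar = {"face": face_parameters(user_image)}
for segment, entries in library.items():
    feat = segment_feature(user_image, segment)
    avatar[segment] = nearest_segment_params(feat, entries)
# "avatar" now holds the face and segment parameter values driving the vector art
```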
-
28.
Publication No.: US20230360170A1
Publication Date: 2023-11-09
Application No.: US18055716
Filing Date: 2022-11-15
Applicant: Adobe Inc.
Inventor: Mohammad Reza Karimi Dastjerdi, Yannick Hold-Geoffroy, Vladimir Kim, Jonathan Eisenmann, Jean-François Lalonde
IPC: G06T3/40 , G06T7/00 , G06V10/776 , G06T3/00 , G06V10/774
CPC classification number: G06T3/4023 , G06T3/0012 , G06T3/0093 , G06T3/4046 , G06T7/0002 , G06V10/774 , G06V10/776 , G06T2207/20081 , G06T2207/20084 , G06T2207/30168
Abstract: Embodiments are disclosed for generating 360-degree panoramas from input narrow field of view images. A method of generating 360-degree panoramas may include obtaining an input image and guide, generating a panoramic projection of the input image, and generating, by a panorama generator, a 360-degree panorama based on the panoramic projection and the guide, wherein the panorama generator is a guided co-modulation generator network trained to generate a 360-degree panorama from the input image based on the guide.
-
29.
Publication No.: US20220114365A1
Publication Date: 2022-04-14
Application No.: US17068429
Filing Date: 2020-10-12
Applicant: ADOBE INC.
Inventor: Michal Lukáč, Oliver Wang, Jan Brejcha, Yannick Hold-Geoffroy, Martin Čadík
Abstract: Methods and systems are provided for facilitating large-scale augmented reality in relation to outdoor scenes using estimated camera pose information. In particular, camera pose information for an image can be estimated by matching the image to a rendered ground-truth terrain model with known camera pose information. To match images with such renders, a data-driven cross-domain feature embedding can be learned using a neural network. Cross-domain feature descriptors can be used for efficient and accurate feature matching between the image and the terrain model renders. This feature matching allows images to be localized in relation to the terrain model, which has known camera pose information. This known camera pose information can then be used to estimate camera pose information in relation to the image.
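As a retrieval-style simplification of the matching step, the sketch below embeds a query photo and a set of terrain renders with stand-in encoders, picks the nearest render in descriptor space, and inherits its known camera pose; the actual system matches local cross-domain features rather than whole images, so this is only an assumption-level illustration:
```python
# Hedged sketch: locate a photo by nearest-neighbour search against terrain
# renders with known camera poses, using stand-in embedding networks.
import numpy as np

rng = np.random.default_rng(0)

def embed_photo(image):                     # stand-in photo-branch encoder
    return rng.random(128)

def embed_render(render):                   # stand-in render-branch encoder
    return rng.random(128)

# renders of the terrain model with known camera poses (position + heading)
renders = [(rng.random((64, 64, 3)),
            {"position": rng.random(3), "heading": rng.random()})
           for _ in range(20)]

query = rng.random((64, 64, 3))
q = embed_photo(query)
descriptors = np.stack([embed_render(r) for r, _ in renders])
best = int(np.argmin(np.linalg.norm(descriptors - q, axis=1)))
estimated_pose = renders[best][1]           # pose inherited from the matched render
```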
-
30.
Publication No.: US20200186714A1
Publication Date: 2020-06-11
Application No.: US16789195
Filing Date: 2020-02-12
Applicant: Adobe Inc.
IPC: H04N5/232 , G06K9/00 , G06N3/04 , G06K9/62 , G06K9/46 , G06T5/00 , H04N5/235 , G06N3/08 , G06T15/50
Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with the single low-dynamic range digital image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions.
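A toy stand-in for the sky-model step, assuming the network has already predicted a sun direction, sun intensity, and ambient sky colour; it writes these into an HDR environment map whose sun values exceed the low-dynamic range. This parametric form is an assumption for illustration, not the patent's sky model:
```python
# Hedged sketch: a toy parametric sky -- ambient colour plus a sharp sun lobe --
# rendered into an HDR equirectangular environment map.
import numpy as np

def hdr_sky(sun_azimuth, sun_elevation, sun_intensity, sky_color, h=64, w=128):
    lon = (np.linspace(0, w - 1, w) / (w - 1) * 2 - 1) * np.pi
    lat = (np.linspace(0, h - 1, h) / (h - 1)) * (np.pi / 2)   # upper hemisphere
    lon, lat = np.meshgrid(lon, lat)
    # unit directions on the sky dome and the sun direction
    d = np.stack([np.cos(lat) * np.sin(lon), np.sin(lat), np.cos(lat) * np.cos(lon)], -1)
    s = np.array([np.cos(sun_elevation) * np.sin(sun_azimuth),
                  np.sin(sun_elevation),
                  np.cos(sun_elevation) * np.cos(sun_azimuth)])
    # sharp lobe around the sun plus a constant ambient term (values above 1 = HDR)
    lobe = np.exp(100.0 * (np.clip(d @ s, 0, 1) - 1.0))
    return sky_color[None, None, :] + sun_intensity * lobe[..., None]

env = hdr_sky(sun_azimuth=0.5, sun_elevation=0.9, sun_intensity=500.0,
              sky_color=np.array([0.3, 0.5, 0.9]))
print(env.max())   # peak radiance far above 1.0, i.e. high dynamic range
```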