-
Publication No.: US20250078387A1
Publication Date: 2025-03-06
Application No.: US18242764
Filing Date: 2023-09-06
Applicant: Adobe Inc.
Inventor: Valentin Deschaintre, Yannick Hold-Geoffroy, Paul Guerrero
IPC: G06T15/04, G06F16/535, G06F16/583
Abstract: A material search computing system generates a joint feature comparison space by combining joint image-text features of surface material data objects. The joint feature comparison space is a consistent comparison space. The material search computing system extracts a query joint feature set from a query data object that includes text data or image data. In addition, the material search computing system compares the query joint feature set to the joint image-text features included in the joint feature comparison space. Based on the comparison, the material search computing system identifies a result joint feature set and associated result surface material data objects. The material search computing system generates material query result data describing the result surface material data objects, and provides the material query result data to an additional computing system.
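A minimal sketch of the retrieval step the abstract describes, assuming precomputed joint image-text embeddings and cosine-similarity comparison; all names, dimensions, and the embedding source (e.g. a CLIP-style dual encoder) are illustrative assumptions, not details from the patent:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Joint image-text features for N surface material data objects,
# forming the joint feature comparison space.
material_features = normalize(rng.standard_normal((1000, 512)))

# Query joint feature set extracted from a text or image query.
query_feature = normalize(rng.standard_normal(512))

# Compare the query features against the comparison space with
# cosine similarity and keep the top-k result materials.
scores = material_features @ query_feature
top_k = np.argsort(scores)[::-1][:5]
print("result material ids:", top_k, "scores:", scores[top_k])
```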
-
Publication No.: US11887241B2
Publication Date: 2024-01-30
Application No.: US17559867
Filing Date: 2021-12-22
Applicant: Adobe Inc.
Inventor: Zexiang Xu, Yannick Hold-Geoffroy, Milos Hasan, Kalyan Sunkavalli, Fanbo Xiang
Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a third neural network to generate a 3D appearance representation of the object.
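A rough sketch of the two-stage mapping described above: one MLP maps 3D scene points into 2D texture coordinates, and another predicts radiance from texture coordinates plus view direction. The architectures and sizes are placeholders, not the patented networks:

```python
import torch
import torch.nn as nn

texture_mapper = nn.Sequential(        # 3D point -> 2D texture space
    nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2), nn.Sigmoid())

radiance_net = nn.Sequential(          # (uv, view dir) -> RGB radiance
    nn.Linear(2 + 3, 64), nn.ReLU(), nn.Linear(64, 3))

points = torch.rand(1024, 3)           # 3D points sampled in the scene
view_dirs = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)

uv = texture_mapper(points)            # map each point into texture space
rgb = radiance_net(torch.cat([uv, view_dirs], dim=-1))
print(uv.shape, rgb.shape)             # (1024, 2) and (1024, 3)
```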
-
Publication No.: US20230098115A1
Publication Date: 2023-03-30
Application No.: US18062460
Filing Date: 2022-12-06
Applicant: Adobe Inc., Université Laval
Inventor: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagne, Marc-Andre Gardner, Jean-Francois Lalonde
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
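A hypothetical sketch of the network shape the abstract outlines, with common layers shared across all lighting parameters and separate heads per parameter; the specific layer shapes and parameter set are assumptions for illustration:

```python
import torch
import torch.nn as nn

class SourceSpecificLightingNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Common network layers shared by all lighting parameters.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Network layers specific to different 3D lighting parameters.
        self.position = nn.Linear(32, 3)   # light source position
        self.color = nn.Linear(32, 3)      # light source color
        self.intensity = nn.Linear(32, 1)  # light source intensity

    def forward(self, image):
        f = self.trunk(image)
        return {"position": self.position(f),
                "color": self.color(f),
                "intensity": self.intensity(f)}

params = SourceSpecificLightingNet()(torch.rand(1, 3, 128, 128))
print({k: v.shape for k, v in params.items()})
```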
-
Publication No.: US11157773B2
Publication Date: 2021-10-26
Application No.: US16802243
Filing Date: 2020-02-26
Applicant: Adobe Inc.
Inventor: Cameron Smith, Yannick Hold-Geoffroy, Mariia Drozdova
Abstract: Images can be edited to include features similar to a different target image. An unconditional generative adversarial network (GAN) is employed to edit features of an initial image based on a constraint determined from a target image. The constraint used by the GAN is determined from keypoints or segmentation masks of the target image, and edits are made to features of the initial image based on keypoints or segmentation masks of the initial image corresponding to those of the constraint from the target image. The GAN modifies the initial image based on a loss function having a variable for the constraint. The result of this optimization process is a modified initial image having features similar to the target image subject to the constraint determined from the identified keypoints or segmentation masks.
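A toy sketch of the optimization loop implied above: a GAN latent is optimized so that keypoints of the generated image match a constraint derived from the target image. The generator and keypoint extractor below are stand-in differentiable functions, not Adobe's models:

```python
import torch

def generator(z):                 # stand-in for a pretrained GAN generator
    return torch.tanh(z @ torch.ones(64, 3 * 32 * 32) * 0.01).view(3, 32, 32)

def keypoints(img):               # stand-in differentiable keypoint extractor
    return img.mean(dim=0).view(-1)[:10]

target_kp = torch.rand(10)        # constraint determined from the target image
z = torch.randn(1, 64, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    opt.zero_grad()
    # Loss function with a variable for the keypoint constraint.
    loss = torch.nn.functional.mse_loss(keypoints(generator(z)), target_kp)
    loss.backward()
    opt.step()
print("final constraint loss:", loss.item())
```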
-
Publication No.: US20210319532A1
Publication Date: 2021-10-14
Application No.: US16848741
Filing Date: 2020-04-14
Applicant: Adobe Inc.
Inventor: Julia Gong, Yannick Hold-Geoffroy, Jingwan Lu
Abstract: Techniques and systems are provided for configuring neural networks to warp an object represented in an image, creating a caricature of the object. For instance, in response to obtaining an image of an object, a warped image generator generates a warping field using the image as input. The warping field is generated using a model trained on pairings of training images and known warped images with supervised learning techniques and one or more losses. Based on the warping field, the warped image generator determines a set of displacements for the pixels of the input image, indicating per-pixel displacement directions. These displacements are applied to the input image to generate a warped image of the object.
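A minimal sketch of applying a predicted warping field to an image via grid sampling; the displacement field here is synthetic, standing in for the generator's output, and the resolution is arbitrary:

```python
import torch
import torch.nn.functional as F

image = torch.rand(1, 3, 64, 64)                   # input image of the object

# Base sampling grid in [-1, 1], one (x, y) coordinate per output pixel.
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 64),
                        torch.linspace(-1, 1, 64), indexing="ij")
grid = torch.stack([xs, ys], dim=-1).unsqueeze(0)  # (1, 64, 64, 2)

# Per-pixel displacements (the "warping field"); a small random field
# here, where the model would predict exaggerated caricature offsets.
displacements = 0.05 * torch.randn(1, 64, 64, 2)

warped = F.grid_sample(image, grid + displacements, align_corners=True)
print(warped.shape)                                # torch.Size([1, 3, 64, 64])
```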
-
Publication No.: US10979640B2
Publication Date: 2021-04-13
Application No.: US16789195
Filing Date: 2020-02-12
Applicant: Adobe Inc.
IPC: G06T5/00, H04N5/232, G06K9/46, G06T15/50, G06N3/08, H04N5/235, G06K9/00, G06K9/62, G06N3/04
Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with the single low-dynamic range lighting digital image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions.
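A simplified sketch of the augmentation idea: low-dynamic range sky pixels are extended with a parametric sun term so that the predicted lighting exceeds the LDR range. The Gaussian sun model and all constants below are illustrative stand-ins for a full sky model:

```python
import numpy as np

H, W = 64, 128                                   # latitude-longitude sky panorama
ldr_sky = np.random.rand(H, W, 3) * 0.8          # LDR pixels, capped below 1.0

sun_az, sun_el = 0.3, 0.9                        # predicted sun direction (radians)
az = np.linspace(-np.pi, np.pi, W)[None, :]
el = np.linspace(0, np.pi / 2, H)[:, None]

# Angular distance to the sun drives a sharp HDR peak far above the sky.
d2 = (az - sun_az) ** 2 + (el - sun_el) ** 2
sun = 5000.0 * np.exp(-d2 / 0.001)

hdr_sky = ldr_sky + sun[:, :, None] * np.array([1.0, 0.95, 0.85])
print("max LDR:", ldr_sky.max(), "max HDR:", hdr_sky.max())
```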
-
Publication No.: US10957026B1
Publication Date: 2021-03-23
Application No.: US16564398
Filing Date: 2019-09-09
Applicant: Adobe Inc.
Inventor: Jinsong Zhang, Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Jonathan Eisenmann, Jean-Francois Lalonde
Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate high-dynamic range lighting parameters for input low-dynamic range images. The high-dynamic range lighting parameters can be based on sky color, sky turbidity, sun color, sun shape, and sun position. Such input low-dynamic range images can be low-dynamic range panorama images or low-dynamic range standard images. Such a neural network system can apply the estimated high-dynamic range lighting parameters to objects added to the low-dynamic range images.
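A sketch of a regression head that splits one feature vector into the named lighting parameters from the abstract; the dimensions, the slot layout, and the image backbone are all placeholders:

```python
import torch
import torch.nn as nn

head = nn.Linear(256, 3 + 1 + 3 + 1 + 2)  # one output slot per parameter group

features = torch.rand(1, 256)             # features from an image backbone
out = head(features)

lighting = {
    "sky_color": out[:, 0:3],
    "sky_turbidity": out[:, 3:4],
    "sun_color": out[:, 4:7],
    "sun_shape": out[:, 7:8],              # e.g. angular size of the sun
    "sun_position": out[:, 8:10],          # azimuth and elevation
}
print({k: v.shape for k, v in lighting.items()})
```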
-
Publication No.: US20240143835A1
Publication Date: 2024-05-02
Application No.: US18052121
Filing Date: 2022-11-02
Applicant: Adobe Inc.
Inventor: Siavash Khodadadeh, Ratheesh Kalarot, Shabnam Ghadar, Yannick Hold-Geoffroy
IPC: G06F21/62, G06N3/0455, G06N3/0475
CPC classification number: G06F21/6254, G06N3/0455, G06N3/0475
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating anonymized digital images utilizing a face anonymization neural network. In some embodiments, the disclosed systems utilize a face anonymization neural network to extract or encode a face anonymization guide that encodes face attribute features, such as gender, ethnicity, age, and expression. In some cases, the disclosed systems utilize the face anonymization guide to inform the face anonymization neural network in generating synthetic face pixels for anonymizing a digital image while retaining attributes, such as gender, ethnicity, age, and expression. The disclosed systems learn parameters for a face anonymization neural network for preserving face attributes, accounting for multiple faces in digital images, and generating synthetic face pixels for faces in profile poses.
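A schematic sketch of guided anonymization as described above: a stand-in generator produces synthetic face pixels conditioned on an attribute guide, and those pixels are composited back only inside the face mask. Every component here is a placeholder for the patented networks:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((128, 128, 3))

face_mask = np.zeros((128, 128, 1))
face_mask[32:96, 40:88] = 1.0             # detected face region

def attribute_guide(face_crop):           # stand-in attribute encoder
    return face_crop.mean(axis=(0, 1))    # encodes e.g. age/expression cues

def generate_face(guide, shape):          # stand-in anonymizing generator
    return np.clip(guide + 0.1 * rng.standard_normal(shape), 0, 1)

guide = attribute_guide(image[32:96, 40:88])
synthetic = generate_face(guide, image.shape)

# Synthetic pixels replace the face; the rest of the image is retained.
anonymized = face_mask * synthetic + (1 - face_mask) * image
print(anonymized.shape)
```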
-
Publication No.: US11663467B2
Publication Date: 2023-05-30
Application No.: US16691110
Filing Date: 2019-11-21
Applicant: Adobe Inc.
Inventor: Long Mai, Yannick Hold-Geoffroy, Naoto Inoue, Daichi Ito, Brian Lynn Price
CPC classification number: G06N3/08, G06T5/50, G06T15/506, G06T15/80, G06T2207/10028, G06T2207/20081, G06T2207/20084
Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust its contrast based on the geometric information in the image. In embodiments, a trained neural network automatically generates an AO map for a 2D image without any predefined 3D scene information. Optimizing the neural network to estimate AO maps requires training, testing, and validating it on a synthetic dataset of image pairs and ground-truth AO maps rendered from 3D scenes. Using an estimated AO map to adjust the contrast of a 2D image makes the image appear more lifelike by modifying its shadows and shading based on the ambient lighting present in the image.
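A small sketch of combining a 2D image with an estimated AO map to darken occluded regions; the AO map here is random, standing in for the network's prediction, and the modulation exponent is an assumed control:

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((64, 64, 3))          # input 2D image
ao_map = rng.random((64, 64, 1))         # estimated AO, 0 = occluded, 1 = open

strength = 0.7                           # how strongly AO modulates shading
adjusted = image * (ao_map ** strength)  # multiply-in ambient occlusion
print(adjusted.shape)
```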
-
Publication No.: US11488342B1
Publication Date: 2022-11-01
Application No.: US17332708
Filing Date: 2021-05-27
Applicant: Adobe Inc.
Inventor: Kalyan Krishna Sunkavalli, Yannick Hold-Geoffroy, Milos Hasan, Zexiang Xu, Yu-Ying Yeh, Stefano Corazza
Abstract: Embodiments of the technology described herein make unknown material maps in a Physically Based Rendering (PBR) asset usable through an identification process that relies, at least in part, on image analysis. In addition, when a desired material-map type is completely missing from a PBR asset, the technology described herein may generate a suitable synthetic material map for use in rendering. In one aspect, the correct map type is assigned using a machine classifier, such as a convolutional neural network, which analyzes the image content of the unknown material map and produces a classification. The technology described herein also correlates material maps into material definitions using a combination of material-map type and similarity analysis. The technology described herein may generate synthetic maps to be used in place of missing material maps. The synthetic maps may be generated using a Generative Adversarial Network (GAN).
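A sketch of the classification step: a small CNN assigns an unknown material map to a map type based on its image content. The class list and architecture are illustrative only:

```python
import torch
import torch.nn as nn

MAP_TYPES = ["albedo", "normal", "roughness", "metallic", "height"]

classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, len(MAP_TYPES)))

unknown_map = torch.rand(1, 3, 256, 256)       # image content of the map
logits = classifier(unknown_map)
predicted = MAP_TYPES[int(logits.argmax(dim=1))]
print("assigned map type:", predicted)
```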