-
Publication No.: US11887241B2
Publication Date: 2024-01-30
Application No.: US17559867
Filing Date: 2021-12-22
Applicant: Adobe Inc.
Inventor: Zexiang Xu , Yannick Hold-Geoffroy , Milos Hasan , Kalyan Sunkavalli , Fanbo Xiang
Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a third neural network to generate a 3D appearance representation of the object.
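A minimal sketch of the three-network pipeline the abstract describes. The network sizes, activations, and variable names here are illustrative assumptions, not the patented architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim):
    """Toy one-hidden-layer network standing in for a trained model."""
    w1 = rng.normal(size=(in_dim, 16))
    w2 = rng.normal(size=(16, out_dim))
    return lambda x: np.tanh(x @ w1) @ w2

density_net = mlp(3, 1)   # first network: 3D point -> volumetric density
mapping_net = mlp(3, 2)   # second network: 3D point -> 2D texture (u, v)
texture_net = mlp(5, 3)   # third network: (u, v) + view dir -> RGB radiance

points = rng.normal(size=(4, 3))       # sampled 3D scene points
view_dirs = rng.normal(size=(4, 3))    # one viewpoint per point
sigma = density_net(points)            # volumetric representation
uv = mapping_net(points)               # texture-space coordinates
rgb = texture_net(np.concatenate([uv, view_dirs], axis=1))  # radiance values
```

Because the appearance lives in the shared 2D texture space, radiance can be queried for any viewpoint without re-running the geometry networks.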
-
Publication No.: US20230098115A1
Publication Date: 2023-03-30
Application No.: US18062460
Filing Date: 2022-12-06
Applicant: Adobe Inc. , Université Laval
Inventor: Kalyan Sunkavalli , Yannick Hold-Geoffroy , Christian Gagne , Marc-Andre Gardner , Jean-Francois Lalonde
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
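The "common layers plus parameter-specific layers" layout can be sketched as below. Layer widths and the choice of lighting parameters (direction, size, color) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(in_dim, out_dim):
    """Toy ReLU layer standing in for trained network layers."""
    w = rng.normal(size=(in_dim, out_dim)) * 0.1
    return lambda x: np.maximum(x @ w, 0.0)

common = layer(128, 64)  # common network layers shared by all parameters
heads = {                # network layers specific to different lighting parameters
    "direction": layer(64, 3),
    "size": layer(64, 1),
    "color": layer(64, 3),
}

image_features = rng.normal(size=(1, 128))  # features of the digital image
shared = common(image_features)
source_params = {name: head(shared) for name, head in heads.items()}
```

During training, such source-specific parameters would be projected to a predicted environment map and compared against a ground-truth environment map, as the abstract describes.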
-
Publication No.: US11189060B2
Publication Date: 2021-11-30
Application No.: US16863540
Filing Date: 2020-04-30
Applicant: Adobe Inc.
Inventor: Milos Hasan , Liang Shi , Tamy Boubekeur , Kalyan Sunkavalli , Radomir Mech
Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
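The optimization loop can be sketched with a toy differentiable "render" and a squared-error loss in place of the style loss; the two-parameter material and the finite-difference gradients are illustrative stand-ins for the differentiable pipeline:

```python
import numpy as np

def render(p):
    # Stand-in for rendering a digital image of a procedural material
    # from its parameters p.
    return np.array([p[0] ** 2, p[0] * p[1], p[1]])

target_image = render(np.array([0.8, 0.3]))  # image of target physical material
params = np.array([0.2, 0.9])                # base procedural material parameters

def loss(p):
    # Toy appearance loss standing in for the style loss.
    diff = render(p) - target_image
    return float(diff @ diff)

lr, eps = 0.1, 1e-5
for _ in range(500):
    base = loss(params)
    grad = np.array([(loss(params + eps * np.eye(2)[i]) - base) / eps
                     for i in range(2)])
    params -= lr * grad  # modify parameters based on the determined loss
```

After the loop, `params` defines the procedural material matching the target.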
-
Publication No.: US10810469B2
Publication Date: 2020-10-20
Application No.: US15975329
Filing Date: 2018-05-09
Applicant: Adobe Inc. , The Regents of the University of California
Inventor: Kalyan Sunkavalli , Zhengqin Li , Manmohan Chandraker
Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for extracting material properties from a single digital image portraying one or more materials by utilizing a neural network encoder, a neural network material classifier, and one or more neural network material property decoders. In particular, in one or more embodiments, the disclosed systems and methods train the neural network encoder, the neural network material classifier, and one or more neural network material property decoders to accurately extract material properties from a single digital image portraying one or more materials. Furthermore, in one or more embodiments, the disclosed systems and methods train and utilize a rendering layer to generate model images from the extracted material properties.
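The encoder / classifier / property-decoder split can be sketched as follows. The latent size, the four material classes, and the three property decoders are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def net(in_dim, out_dim):
    """Toy network standing in for a trained component."""
    w = rng.normal(size=(in_dim, out_dim)) * 0.1
    return lambda x: np.tanh(x @ w)

encoder = net(32, 8)       # neural network encoder: image -> latent code
classifier = net(8, 4)     # neural network material classifier
decoders = {               # neural network material property decoders
    "albedo": net(8, 3),
    "normal": net(8, 3),
    "roughness": net(8, 1),
}

image_features = rng.normal(size=(1, 32))  # single digital image of a material
z = encoder(image_features)
material_class = int(np.argmax(classifier(z)))
properties = {name: dec(z) for name, dec in decoders.items()}
```

A rendering layer would then re-synthesize a model image from `properties` so the extracted values can be supervised against the input image.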
-
Publication No.: US10665011B1
Publication Date: 2020-05-26
Application No.: US16428482
Filing Date: 2019-05-31
Applicant: Adobe Inc. , Université Laval
Inventor: Kalyan Sunkavalli , Sunil Hadap , Nathan Carr , Jean-Francois Lalonde , Mathieu Garon
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to render a virtual object in a digital scene by analyzing both global and local features of the digital scene and generating location-specific-lighting parameters for a designated position within the digital scene. For example, the disclosed systems extract and combine such global and local features from a digital scene using global network layers and local network layers of the local-lighting-estimation-neural network. In certain implementations, the disclosed systems can generate location-specific-lighting parameters using a neural-network architecture that combines global and local feature vectors to spatially vary lighting for different positions within a digital scene.
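The global/local fusion can be sketched as below; the feature widths and the nine-value lighting output are illustrative assumptions, not the claimed parameterization:

```python
import numpy as np

rng = np.random.default_rng(3)

def layers(in_dim, out_dim):
    """Toy ReLU stack standing in for trained network layers."""
    w = rng.normal(size=(in_dim, out_dim)) * 0.1
    return lambda x: np.maximum(x @ w, 0.0)

global_layers = layers(64, 16)  # operate on features of the whole scene
local_layers = layers(64, 16)   # operate on a crop at the designated position
head = layers(32, 9)            # fused features -> lighting parameters

scene_features = rng.normal(size=(1, 64))   # global features
patch_features = rng.normal(size=(1, 64))   # local features near the position
fused = np.concatenate(
    [global_layers(scene_features), local_layers(patch_features)], axis=1)
lighting_params = head(fused)   # location-specific-lighting parameters
```

Running the same head with patch features from different positions yields spatially varying lighting across the scene.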
-
Publication No.: US12165284B2
Publication Date: 2024-12-10
Application No.: US17655663
Filing Date: 2022-03-21
Applicant: Adobe Inc.
Inventor: He Zhang , Jianming Zhang , Jose Ignacio Echevarria Vallespi , Kalyan Sunkavalli , Meredith Payne Stotzner , Yinglan Ma , Zhe Lin , Elya Shechtman , Frederick Mandia
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement a dual-branched neural network architecture to harmonize composite images. For example, in one or more implementations, the transformer-based harmonization system uses a convolutional branch and a transformer branch to generate a harmonized composite image based on an input composite image and a corresponding segmentation mask. More particularly, the convolutional branch comprises a series of convolutional neural network layers followed by a style normalization layer to extract localized information from the input composite image. Further, the transformer branch comprises a series of transformer neural network layers to extract global information based on different resolutions of the input composite image. Utilizing a decoder, the transformer-based harmonization system combines the local information and the global information from the corresponding convolutional branch and transformer branch to generate a harmonized composite image.
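The two branches can be sketched with a box filter standing in for the convolutional branch and one self-attention pass standing in for the transformer branch; the image size, the grayscale simplification, and the blending decoder are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.random((8, 8))                     # toy composite image (grayscale)
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0                         # segmentation mask of the insert

# Convolutional branch: a 3x3 box filter extracts localized information.
pad = np.pad(img, 1, mode="edge")
local = sum(pad[i:i + 8, j:j + 8] for i in range(3) for j in range(3)) / 9.0

# Transformer branch: one self-attention pass over flattened pixels
# extracts global information.
x = img.reshape(-1, 1)
attn = np.exp(x @ x.T)
attn /= attn.sum(axis=1, keepdims=True)
global_ctx = (attn @ x).reshape(8, 8)

# Decoder: combine local and global information inside the masked region.
harmonized = img * (1 - mask) + 0.5 * (local + global_ctx) * mask
```

Only the masked (composited) region is re-synthesized; the background pixels pass through unchanged.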
-
Publication No.: US20240062495A1
Publication Date: 2024-02-22
Application No.: US17892097
Filing Date: 2022-08-21
Applicant: Adobe Inc.
Inventor: Zhixin Shu , Zexiang Xu , Shahrukh Athar , Kalyan Sunkavalli , Elya Shechtman
CPC classification number: G06T19/20 , G06T17/00 , G06T2200/08 , G06T2219/2021
Abstract: A scene modeling system receives a video including a plurality of frames corresponding to views of an object and a request to display an editable three-dimensional (3D) scene that corresponds to a particular frame of the plurality of frames. The scene modeling system applies a scene representation model to the particular frame. The scene representation model includes a deformation model configured to generate, for each pixel of the particular frame based on a pose and an expression of the object, a deformation point using a 3D morphable model (3DMM) guided deformation field. The scene representation model also includes a color model configured to determine, for the deformation point, color and volume density values. The scene modeling system receives a modification to one or more of the pose or the expression of the object including a modification to a location of the deformation point and renders an updated video based on the received modification.
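The deformation-then-color structure can be sketched as below. The code sizes for pose and expression and the network shapes are illustrative assumptions, not the 3DMM parameterization itself:

```python
import numpy as np

rng = np.random.default_rng(5)

def net(in_dim, out_dim):
    """Toy network standing in for a trained model component."""
    w = rng.normal(size=(in_dim, out_dim)) * 0.1
    return lambda x: np.tanh(x @ w)

deform_net = net(3 + 6, 3)  # point + (pose, expression) codes -> offset
color_net = net(3, 4)       # deformed point -> (r, g, b, volume density)

points = rng.normal(size=(4, 3))            # points sampled for frame pixels
pose, expr = rng.normal(size=3), rng.normal(size=3)
codes = np.tile(np.concatenate([pose, expr]), (4, 1))
deformed = points + deform_net(np.concatenate([points, codes], axis=1))
rgb_sigma = color_net(deformed)             # color and volume density values

# Editing: modify the expression code and re-run the deformation field,
# which moves the deformation points and changes the rendered frame.
codes_edit = np.tile(np.concatenate([pose, expr + 0.5]), (4, 1))
deformed_edit = points + deform_net(np.concatenate([points, codes_edit], axis=1))
```

Because edits act only on the pose/expression codes, the same color model renders the updated video without retraining.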
-
Publication No.: US11042990B2
Publication Date: 2021-06-22
Application No.: US16176917
Filing Date: 2018-10-31
Applicant: Adobe Inc.
Inventor: I-Ming Pao , Sarah Aye Kong , Alan Lee Erickson , Kalyan Sunkavalli , Hyunghwan Byun
Abstract: Systems and techniques for automatic object replacement in an image include receiving an original image and a preferred image. The original image is automatically segmented into an original image foreground region and an original image object region. The preferred image is automatically segmented into a preferred image foreground region and a preferred image object region. A composite image is automatically composed by replacing the original image object region with the preferred image object region such that the composite image includes the original image foreground region and the preferred image object region. An attribute of the composite image is automatically adjusted.
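The replace-and-adjust flow can be sketched on toy grayscale images. The aligned object masks and the brightness-scaling attribute adjustment are illustrative assumptions:

```python
import numpy as np

original = np.full((6, 6), 0.2)   # original image
preferred = np.full((6, 6), 0.9)  # preferred image

# Segmentation: object region vs. the rest (assumed aligned across images).
obj_mask = np.zeros((6, 6), dtype=bool)
obj_mask[1:4, 1:4] = True

# Compose: replace the original object region with the preferred one.
composite = original.copy()
composite[obj_mask] = preferred[obj_mask]

# Automatic attribute adjustment (here, a simple brightness scale).
composite = np.clip(composite * 1.1, 0.0, 1.0)
```

The result keeps the original foreground everywhere outside the mask while carrying the preferred object region, then applies a global attribute tweak.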
-
Publication No.: US20200273237A1
Publication Date: 2020-08-27
Application No.: US15930925
Filing Date: 2020-05-13
Applicant: Adobe Inc.
Inventor: Kalyan Sunkavalli , Zexiang Xu , Sunil Hadap
Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
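The relight-then-combine idea can be sketched as below, with a direction-weighted blend standing in for the object relighting neural network; the five fixed input light directions are an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(6)

# Light directions of the sparse input set (assumed known).
input_dirs = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0],
                       [0.577, 0.577, 0.577], [-1.0, 0, 0]])

def relight(images, target_dir):
    """Stand-in for the relighting network: weight input images by how
    well their light direction agrees with the target direction."""
    w = np.maximum(input_dirs @ target_dir, 0.0)
    w /= w.sum() + 1e-8
    return np.tensordot(w, images, axes=1)

sparse_set = rng.random((5, 4, 4))  # sparse set of input digital images
target_a = relight(sparse_set, np.array([1.0, 0.0, 0.0]))
target_b = relight(sparse_set, np.array([0.0, 1.0, 0.0]))

# Target lighting configuration = combination of target directions.
modified = 0.5 * target_a + 0.5 * target_b
```

Because light transport is additive, any configuration expressible as a weighted sum of target directions can be rendered by blending the per-direction target images.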
-
Publication No.: US10692265B2
Publication Date: 2020-06-23
Application No.: US16676733
Filing Date: 2019-11-07
Applicant: Adobe Inc.
Inventor: Sunil Hadap , Elya Shechtman , Zhixin Shu , Kalyan Sunkavalli , Mehmet Yumer
Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age—aspects that cannot be represented using previous models.
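The disentangle-edit-render loop can be sketched as below. The latent size, the particular slices for geometry, lighting, and expression, and the step along the lighting manifold are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def net(in_dim, out_dim):
    """Toy network standing in for a trained encoder/decoder."""
    w = rng.normal(size=(in_dim, out_dim)) * 0.1
    return lambda x: np.tanh(x @ w)

encoder = net(64, 12)  # facial image features -> disentangled latent
decoder = net(12, 64)  # latent -> rendered facial image features

face = rng.normal(size=(1, 64))
z = encoder(face)

# Assumed disentangled layout of the latent code:
geometry, lighting, expression = z[:, :4], z[:, 4:8], z[:, 8:]

# Edit by traversing a path on the lighting manifold, then re-render.
z_edit = z.copy()
z_edit[:, 4:8] += 0.3
edited_face = decoder(z_edit)
```

Edits to viewpoint, expression, or higher-level attributes would follow the same pattern: move along the corresponding manifold direction and decode.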
-