-
1.
Publication Number: US11776184B2
Publication Date: 2023-10-03
Application Number: US17204638
Filing Date: 2021-03-17
Applicant: ADOBE INC.
Inventor: Jianming Zhang , Alan Erickson , I-Ming Pao , Guotong Feng , Kalyan Sunkavalli , Frederick Mandia , Hyunghwan Byun , Betty Leong , Meredith Payne Stotzner , Yukie Takahashi , Quynn Megan Le , Sarah Kong
IPC: G06T11/60 , G06F3/04842 , G06F3/04845 , G06T11/00
CPC classification number: G06T11/60 , G06F3/04842 , G06F3/04845 , G06T11/001
Abstract: The present disclosure provides systems and methods for image editing. Embodiments of the present disclosure provide an image editing system for performing image object replacement or image region replacement (e.g., an image editing system for replacing an object or region of an image with an object or region from another image). For example, the image editing system may replace a sky portion of an image with a more desirable sky portion from a different replacement image. The original image and the replacement image (e.g., the image including a desirable object or region) include layers of masks. A sky from the replacement image may replace the sky of the image to produce an aesthetically pleasing composite image.
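The compositing step described in this abstract can be illustrated with a short sketch. The Python below is a minimal, hypothetical example of mask-based region replacement (here, a sky) using numpy; the function name, array shapes, and the simple alpha blend are assumptions for illustration, not the patented system.

```python
# Minimal sketch of mask-based sky replacement, assuming numpy arrays for
# images and a soft sky mask in [0, 1]; names are illustrative only.
import numpy as np

def replace_sky(original: np.ndarray, replacement: np.ndarray,
                sky_mask: np.ndarray) -> np.ndarray:
    """Blend the replacement image into the original where sky_mask is high.

    original, replacement: H x W x 3 float arrays in [0, 1]
    sky_mask: H x W float array in [0, 1] (1 = sky pixel)
    """
    alpha = sky_mask[..., None]                       # broadcast over channels
    composite = alpha * replacement + (1.0 - alpha) * original
    return np.clip(composite, 0.0, 1.0)

# Example usage with random data standing in for real images and a mask.
if __name__ == "__main__":
    h, w = 256, 384
    original = np.random.rand(h, w, 3)
    replacement = np.random.rand(h, w, 3)
    sky_mask = np.zeros((h, w))
    sky_mask[: h // 2] = 1.0                          # treat the top half as "sky"
    out = replace_sky(original, replacement, sky_mask)
    print(out.shape)
```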
-
2.
Publication Number: US11443412B2
Publication Date: 2022-09-13
Application Number: US16678072
Filing Date: 2019-11-08
Applicant: Adobe Inc.
Inventor: Kalyan Sunkavalli , Mehmet Ersin Yumer , Marc-Andre Gardner , Xiaohui Shen , Jonathan Eisenmann , Emiliano Gambaretto
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
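A rough PyTorch sketch of the encoder/intensity-decoder structure described above follows. The layer sizes, latent dimension, and output resolution are assumptions chosen for brevity; the sketch shows only the shape of the idea, not the trained network or the two-phase training procedure.

```python
# Illustrative encoder that maps an image to an intermediate representation,
# and an intensity decoder that maps it to a non-negative light intensity map.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, x):
        return self.net(x)                    # (B, latent_dim) intermediate representation

class IntensityDecoder(nn.Module):
    def __init__(self, latent_dim: int = 256, out_size: int = 32):
        super().__init__()
        self.out_size = out_size
        self.net = nn.Sequential(nn.Linear(latent_dim, out_size * out_size), nn.Softplus())

    def forward(self, z):
        b = z.shape[0]
        return self.net(z).view(b, 1, self.out_size, self.out_size)  # light intensity map

enc, dec = Encoder(), IntensityDecoder()
img = torch.rand(2, 3, 128, 128)
intensity_map = dec(enc(img))
print(intensity_map.shape)                     # torch.Size([2, 1, 32, 32])
```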
-
3.
Publication Number: US20220156588A1
Publication Date: 2022-05-19
Application Number: US17590995
Filing Date: 2022-02-02
Applicant: Adobe Inc.
Inventor: Federico Perazzi , Zhihao Xia , Michael Gharbi , Kalyan Sunkavalli
Abstract: Certain embodiments involve techniques for efficiently estimating denoising kernels for generating denoised images. For instance, a neural network receives a noisy reference image to denoise. The neural network uses a kernel dictionary of base kernels and generates a coefficient vector for each pixel in the reference image such that the coefficient vector includes a coefficient value for each base kernel in the kernel dictionary, where the base kernels are combined to generate a denoising kernel and each coefficient value indicates a contribution of a given base kernel to a denoising kernel. The neural network calculates the denoising kernel for a given pixel by applying the coefficient vector for that pixel to the kernel dictionary. The neural network applies each denoising kernel to the respective pixel to generate a denoised output image.
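The per-pixel kernel-dictionary idea lends itself to a compact sketch. The numpy code below is a hypothetical illustration: random coefficients stand in for the network's predictions, and the loop simply combines base kernels into a per-pixel denoising kernel and applies it to that pixel's neighborhood.

```python
# Sketch of dictionary-based per-pixel denoising: each pixel's coefficient
# vector linearly combines shared base kernels into its own denoising kernel.
import numpy as np

def apply_dictionary_denoising(noisy, coeffs, base_kernels):
    """noisy: (H, W); coeffs: (H, W, K); base_kernels: (K, k, k)."""
    K, k, _ = base_kernels.shape
    pad = k // 2
    padded = np.pad(noisy, pad, mode="reflect")
    H, W = noisy.shape
    out = np.zeros_like(noisy)
    for y in range(H):
        for x in range(W):
            # Per-pixel kernel = coefficient-weighted sum of base kernels.
            kernel = np.tensordot(coeffs[y, x], base_kernels, axes=1)
            kernel /= kernel.sum() + 1e-8       # normalize to preserve brightness
            patch = padded[y:y + k, x:x + k]
            out[y, x] = np.sum(kernel * patch)
    return out

H, W, K, k = 32, 32, 8, 5
noisy = np.random.rand(H, W)
coeffs = np.random.rand(H, W, K)                # stand-in for network-predicted coefficients
base = np.abs(np.random.rand(K, k, k))          # stand-in for the kernel dictionary
denoised = apply_dictionary_denoising(noisy, coeffs, base)
print(denoised.shape)
```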
-
4.
Publication Number: US11281970B2
Publication Date: 2022-03-22
Application Number: US16686978
Filing Date: 2019-11-18
Applicant: Adobe Inc.
Inventor: Federico Perazzi , Zhihao Xia , Michael Gharbi , Kalyan Sunkavalli
Abstract: Certain embodiments involve techniques for efficiently estimating denoising kernels for generating denoised images. For instance, a neural network receives a noisy reference image to denoise. The neural network uses a kernel dictionary of base kernels and generates a coefficient vector for each pixel in the reference image such that the coefficient vector includes a coefficient value for each base kernel in the kernel dictionary, where the base kernels are combined to generate a denoising kernel and each coefficient value indicates a contribution of a given base kernel to a denoising kernel. The neural network calculates the denoising kernel for a given pixel by applying the coefficient vector for that pixel to the kernel dictionary. The neural network applies each denoising kernel to the respective pixel to generate a denoised output image.
-
5.
Publication Number: US11158117B2
Publication Date: 2021-10-26
Application Number: US16877227
Filing Date: 2020-05-18
Applicant: ADOBE INC.
Inventor: Kalyan Sunkavalli , Sunil Hadap , Nathan Carr , Mathieu Garon
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene. As requests to render a virtual object come in real (or near real) time, the system can quickly generate different location-specific-lighting parameters that accurately reflect lighting conditions at different positions within a digital scene in response to render requests.
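A minimal sketch of what a location-conditioned lighting estimator might look like is given below. It assumes a PyTorch model, a 2D query position in normalized image coordinates, and spherical-harmonic lighting coefficients as the output parameterization; none of these choices are taken from the patent and they are illustrative only.

```python
# Illustrative location-specific lighting estimator: a scene feature is
# concatenated with a query position to predict per-position lighting.
import torch
import torch.nn as nn

class LocalLightingEstimator(nn.Module):
    def __init__(self, sh_terms: int = 9):
        super().__init__()
        self.sh_terms = sh_terms
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 + 2, 128), nn.ReLU(),
            nn.Linear(128, 3 * sh_terms),
        )

    def forward(self, image, position_xy):
        feats = self.backbone(image)                    # (B, 64) global scene feature
        query = torch.cat([feats, position_xy], dim=1)  # condition on the 2D query point
        return self.head(query).view(-1, 3, self.sh_terms)

model = LocalLightingEstimator()
scene = torch.rand(1, 3, 128, 128)
pos = torch.tensor([[0.25, 0.75]])      # normalized coordinates of the render position
print(model(scene, pos).shape)          # torch.Size([1, 3, 9])
```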
-
6.
Publication Number: US20210065440A1
Publication Date: 2021-03-04
Application Number: US16558975
Filing Date: 2019-09-03
Applicant: Adobe Inc. , Université Laval
Inventor: Kalyan Sunkavalli , Yannick Hold-Geoffroy , Christian Gagne , Marc-Andre Gardner , Jean-Francois Lalonde
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
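The "common layers plus parameter-specific layers" structure can be sketched as follows. The PyTorch code assumes three light sources and direction/intensity/size heads as example lighting parameters; the differentiable-projection layer and the training losses are omitted, and all dimensions are assumptions.

```python
# Illustrative compact network: a shared trunk feeds separate heads, one per
# lighting parameter, each predicting values for every light source.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SourceSpecificLightingNet(nn.Module):
    def __init__(self, num_lights: int = 3):
        super().__init__()
        self.num_lights = num_lights
        self.common = nn.Sequential(                    # layers shared by all parameters
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Parameter-specific heads, one output set per light source.
        self.direction_head = nn.Linear(64, 3 * num_lights)   # unit direction per light
        self.intensity_head = nn.Linear(64, 3 * num_lights)   # RGB intensity per light
        self.size_head = nn.Linear(64, num_lights)            # angular size per light

    def forward(self, image):
        f = self.common(image)
        d = self.direction_head(f).view(-1, self.num_lights, 3)
        d = d / (d.norm(dim=-1, keepdim=True) + 1e-8)
        i = F.softplus(self.intensity_head(f)).view(-1, self.num_lights, 3)
        s = torch.sigmoid(self.size_head(f))
        return d, i, s

net = SourceSpecificLightingNet()
directions, intensities, sizes = net(torch.rand(1, 3, 128, 128))
print(directions.shape, intensities.shape, sizes.shape)
```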
-
7.
Publication Number: US10692277B1
Publication Date: 2020-06-23
Application Number: US16360901
Filing Date: 2019-03-21
Applicant: Adobe Inc.
Inventor: Kalyan Sunkavalli , Sunil Hadap , Nathan Carr , Mathieu Garon
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene. As requests to render a virtual object come in real (or near real) time, the system can quickly generate different location-specific-lighting parameters that accurately reflect lighting conditions at different positions within a digital scene in response to render requests.
-
8.
Publication Number: US20240338915A1
Publication Date: 2024-10-10
Application Number: US18132272
Filing Date: 2023-04-07
Applicant: Adobe Inc.
Inventor: Zhixin Shu , Zexiang Xu , Shahrukh Athar , Sai Bi , Kalyan Sunkavalli , Fujun Luan
CPC classification number: G06T19/20 , G06N3/08 , G06T15/80 , G06T17/20 , G06T2210/44 , G06T2219/2012 , G06T2219/2021
Abstract: Certain aspects and features of this disclosure relate to providing a controllable, dynamic appearance for neural 3D portraits. For example, a method involves projecting a color at points in a digital video portrait based on location, surface normal, and viewing direction for each respective point in a canonical space. The method also involves projecting, using the color, dynamic face normals for the points as changing according to an articulated head pose and facial expression in the digital video portrait. The method further involves disentangling, based on the dynamic face normals, a facial appearance in the digital video portrait into intrinsic components in the canonical space. The method additionally involves storing and/or rendering at least a portion of a head pose as a controllable, neural 3D portrait based on the digital video portrait using the intrinsic components.
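As a loose illustration of a point-wise appearance model with intrinsic components, the sketch below assumes an MLP that maps a canonical-space location, surface normal, and viewing direction to an albedo and a shading term whose product gives the point color. The factorization and architecture are assumptions for illustration, not the method claimed here.

```python
# Hypothetical point-wise appearance model: (location, normal, view direction)
# -> intrinsic components (albedo, shading) -> composed color.
import torch
import torch.nn as nn

class IntrinsicPointAppearance(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3 + 3 + 3, hidden), nn.ReLU(),   # location, normal, view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.albedo_head = nn.Sequential(nn.Linear(hidden, 3), nn.Sigmoid())
        self.shading_head = nn.Sequential(nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, location, normal, view_dir):
        f = self.trunk(torch.cat([location, normal, view_dir], dim=-1))
        albedo = self.albedo_head(f)            # view-independent intrinsic color
        shading = self.shading_head(f)          # scalar shading term
        return albedo * shading                 # composed point color

model = IntrinsicPointAppearance()
n = 1024
loc, nrm, view = torch.rand(n, 3), torch.rand(n, 3), torch.rand(n, 3)
print(model(loc, nrm, view).shape)              # torch.Size([1024, 3])
```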
-
9.
Publication Number: US20240185393A1
Publication Date: 2024-06-06
Application Number: US18440248
Filing Date: 2024-02-13
Applicant: Adobe Inc.
Inventor: He Zhang , Yifan Jiang , Yilin Wang , Jianming Zhang , Kalyan Sunkavalli , Sarah Kong , Su Chen , Sohrab Amirghodsi , Zhe Lin
CPC classification number: G06T5/50 , G06N3/04 , G06N3/08 , G06T7/194 , G06T11/001 , G06T11/60 , G06T2207/20081 , G06T2207/20084 , G06T2207/20092 , G06T2207/20132 , G06T2207/20212
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating harmonized digital images utilizing a self-supervised image harmonization neural network. In particular, the disclosed systems can implement, and learn parameters for, a self-supervised image harmonization neural network to extract content from one digital image (disentangled from its appearance) and appearance from another digital image (disentangled from its content). For example, the disclosed systems can utilize a dual data augmentation method to generate diverse triplets for parameter learning (including input digital images, reference digital images, and pseudo ground truth digital images), via cropping a digital image with perturbations using three-dimensional color lookup tables (“LUTs”). Additionally, the disclosed systems can utilize the self-supervised image harmonization neural network to generate harmonized digital images that depict content from one digital image having the appearance of another digital image.
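The triplet construction via augmentation can be approximated with a short sketch: crop a region, perturb its colors with a random 3D LUT to form the input, and keep the unperturbed crop as the pseudo ground truth. The LUT resolution, perturbation model, and nearest-neighbor lookup below are simplifications for illustration and are not taken from the disclosure.

```python
# Hypothetical triplet construction: (perturbed crop, full reference image,
# unperturbed crop as pseudo ground truth) for self-supervised harmonization.
import numpy as np

def random_3d_lut(size: int = 17) -> np.ndarray:
    """Identity LUT of shape (size, size, size, 3) plus a random color cast."""
    grid = np.linspace(0.0, 1.0, size)
    r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
    lut = np.stack([r, g, b], axis=-1)
    lut += np.random.uniform(-0.1, 0.1, size=(1, 1, 1, 3))
    return np.clip(lut, 0.0, 1.0)

def apply_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Nearest-neighbor LUT lookup; real pipelines would interpolate."""
    size = lut.shape[0]
    idx = np.clip((image * (size - 1)).round().astype(int), 0, size - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

def make_triplet(image: np.ndarray, crop: int = 64):
    h, w, _ = image.shape
    y, x = np.random.randint(0, h - crop), np.random.randint(0, w - crop)
    gt = image[y:y + crop, x:x + crop]          # pseudo ground truth crop
    inp = apply_lut(gt, random_3d_lut())        # appearance-perturbed input crop
    return inp, image, gt                       # input, reference, pseudo ground truth

img = np.random.rand(256, 256, 3)
inp, ref, gt = make_triplet(img)
print(inp.shape, ref.shape, gt.shape)
```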
-
10.
Publication Number: US11935217B2
Publication Date: 2024-03-19
Application Number: US17200338
Filing Date: 2021-03-12
Applicant: Adobe Inc.
Inventor: He Zhang , Yifan Jiang , Yilin Wang , Jianming Zhang , Kalyan Sunkavalli , Sarah Kong , Su Chen , Sohrab Amirghodsi , Zhe Lin
CPC classification number: G06T5/50 , G06N3/04 , G06N3/08 , G06T7/194 , G06T11/001 , G06T11/60 , G06T2207/20081 , G06T2207/20084 , G06T2207/20092 , G06T2207/20132 , G06T2207/20212
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating harmonized digital images utilizing a self-supervised image harmonization neural network. In particular, the disclosed systems can implement, and learn parameters for, a self-supervised image harmonization neural network to extract content from one digital image (disentangled from its appearance) and appearance from another digital image (disentangled from its content). For example, the disclosed systems can utilize a dual data augmentation method to generate diverse triplets for parameter learning (including input digital images, reference digital images, and pseudo ground truth digital images), via cropping a digital image with perturbations using three-dimensional color lookup tables (“LUTs”). Additionally, the disclosed systems can utilize the self-supervised image harmonization neural network to generate harmonized digital images that depict content from one digital image having the appearance of another digital image.