-
Publication Number: US12148074B2
Publication Date: 2024-11-19
Application Number: US17503671
Filing Date: 2021-10-18
Applicant: Adobe Inc.
Inventor: He Zhang , Jeya Maria Jose Valanarasu , Jianming Zhang , Jose Ignacio Echevarria Vallespi , Kalyan Sunkavalli , Yilin Wang , Yinglan Ma , Zhe Lin , Zijun Wei
IPC: G06T11/60 , G06F3/04842 , G06F3/04845 , G06N3/08 , G06V10/40 , G06V10/75
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly generating harmonized digital images utilizing an object-to-object harmonization neural network. For example, the disclosed systems implement, and learn parameters for, an object-to-object harmonization neural network to combine a style code from a reference object with features extracted from a target object. Indeed, the disclosed systems extract a style code from a reference object utilizing a style encoder neural network. In addition, the disclosed systems generate a harmonized target object by applying the style code of the reference object to a target object utilizing an object-to-object harmonization neural network.
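A minimal sketch of the general idea in this abstract: encode a reference-object crop into a style code, then modulate target-object features with that code. The AdaIN-style scale/shift modulation, module names, and layer sizes below are assumptions for illustration, not the patented architecture.

```python
# Hedged sketch: generic style-code extraction plus feature modulation.
# All modules and sizes are assumptions, not the disclosed network.
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Encodes a reference-object crop into a fixed-length style code."""
    def __init__(self, style_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, style_dim)

    def forward(self, reference_rgb):
        return self.fc(self.features(reference_rgb).flatten(1))

class ObjectHarmonizer(nn.Module):
    """Applies a reference style code to target-object features via scale/shift."""
    def __init__(self, feat_channels=64, style_dim=256):
        super().__init__()
        self.encode = nn.Conv2d(3, feat_channels, 3, padding=1)
        self.to_scale_shift = nn.Linear(style_dim, 2 * feat_channels)
        self.decode = nn.Conv2d(feat_channels, 3, 3, padding=1)

    def forward(self, target_rgb, style_code):
        feats = self.encode(target_rgb)
        scale, shift = self.to_scale_shift(style_code).chunk(2, dim=1)
        feats = feats * (1 + scale[..., None, None]) + shift[..., None, None]
        return torch.sigmoid(self.decode(feats))  # harmonized target crop

# Toy usage with random tensors standing in for object crops.
style = StyleEncoder()(torch.rand(1, 3, 128, 128))
harmonized = ObjectHarmonizer()(torch.rand(1, 3, 128, 128), style)
```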
-
Publication Number: US12073507B2
Publication Date: 2024-08-27
Application Number: US17861199
Filing Date: 2022-07-09
Applicant: Adobe Inc.
Inventor: Zexiang Xu , Zhixin Shu , Sai Bi , Qiangeng Xu , Kalyan Sunkavalli , Julien Philip
CPC classification number: G06T15/205 , G06T15/06 , G06T15/80 , G06T2207/10028
Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
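A toy sketch of point-based volume rendering in the spirit of this abstract: at each sample along a ray, aggregate color and density from nearby points of a neural point cloud, then alpha-composite. The inverse-distance weighting and all names are assumptions, not the disclosed model.

```python
# Hedged sketch: composite one ray's color from a point cloud of colors/densities.
import numpy as np

def render_ray(ray_o, ray_d, points, point_rgb, point_density,
               n_samples=64, t_near=0.5, t_far=4.0, radius=0.3):
    """Composite a color for one ray from a (toy) neural point cloud."""
    ts = np.linspace(t_near, t_far, n_samples)
    delta = ts[1] - ts[0]
    color, transmittance = np.zeros(3), 1.0
    for t in ts:
        x = ray_o + t * ray_d                      # sample position on the ray
        d = np.linalg.norm(points - x, axis=1)     # distances to all points
        near = d < radius
        if not near.any():
            continue
        w = 1.0 / (d[near] + 1e-6)                 # inverse-distance weights
        w = w / w.sum()
        rgb = (w[:, None] * point_rgb[near]).sum(0)
        sigma = (w * point_density[near]).sum()
        alpha = 1.0 - np.exp(-sigma * delta)       # standard volume-rendering alpha
        color += transmittance * alpha * rgb
        transmittance *= (1.0 - alpha)
    return color

# Toy scene: random points with random colors and densities.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(500, 3))
col = render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]),
                 pts, rng.uniform(0, 1, (500, 3)), rng.uniform(0, 5, 500))
```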
-
Publication Number: US12008710B2
Publication Date: 2024-06-11
Application Number: US18062460
Filing Date: 2022-12-06
Applicant: Adobe Inc. , Université Laval
Inventor: Kalyan Sunkavalli , Yannick Hold-Geoffroy , Christian Gagne , Marc-Andre Gardner , Jean-Francois Lalonde
CPC classification number: G06T15/506 , G06N3/08 , G06T7/50 , G06T7/60 , G06T7/70 , G06T7/90 , G06T2200/24 , G06T2207/20081 , G06T2207/20084
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
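A minimal sketch of the "common layers plus parameter-specific layers" idea from this abstract: a shared convolutional backbone feeding separate heads, one per lighting parameter. The particular head set (position, size, intensity, color) and all sizes are illustrative assumptions.

```python
# Hedged sketch: shared backbone with one head per 3D lighting parameter.
import torch
import torch.nn as nn

class LightingEstimator(nn.Module):
    def __init__(self, n_lights=3):
        super().__init__()
        self.n = n_lights
        self.backbone = nn.Sequential(          # common layers
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # parameter-specific layers: one small head per lighting parameter
        self.position = nn.Linear(64, 3 * n_lights)   # xyz per light source
        self.size = nn.Linear(64, n_lights)
        self.intensity = nn.Linear(64, n_lights)
        self.color = nn.Linear(64, 3 * n_lights)

    def forward(self, image):
        h = self.backbone(image)
        return {
            "position": self.position(h).view(-1, self.n, 3),
            "size": torch.relu(self.size(h)),
            "intensity": torch.relu(self.intensity(h)),
            "color": torch.sigmoid(self.color(h)).view(-1, self.n, 3),
        }

params = LightingEstimator()(torch.rand(1, 3, 128, 128))  # toy forward pass
```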
-
Publication Number: US11972534B2
Publication Date: 2024-04-30
Application Number: US17519841
Filing Date: 2021-11-05
Applicant: Adobe Inc.
IPC: G06T19/20 , G06F18/211 , G06F18/22 , G06N3/02 , G06T15/04
CPC classification number: G06T19/20 , G06F18/211 , G06F18/22 , G06N3/02 , G06T15/04 , G06T2219/2004 , G06T2219/2016
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a visual neural network to replace materials in a three-dimensional scene with visually similar materials from a source dataset. Specifically, the disclosed system utilizes the visual neural network to generate source deep visual features representing source texture maps from materials in a plurality of source materials. Additionally, the disclosed system utilizes the visual neural network to generate deep visual features representing texture maps from materials in a digital scene. The disclosed system then determines source texture maps that are visually similar to the texture maps of the digital scene based on visual similarity metrics that compare the source deep visual features and the deep visual features. Additionally, the disclosed system modifies the digital scene by replacing one or more of the texture maps in the digital scene with the visually similar source texture maps.
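A small sketch of the retrieval step described here: compute deep visual features for a scene texture map and for each source texture map, then pick the source with the highest cosine similarity. The untrained stand-in CNN and the function names are assumptions; the abstract does not specify this exact setup.

```python
# Hedged sketch: nearest-neighbor texture retrieval by deep-feature similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_net = nn.Sequential(                 # stand-in "visual neural network"
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def deep_feature(texture_map):
    with torch.no_grad():
        return feature_net(texture_map.unsqueeze(0)).squeeze(0)

def most_similar_source(scene_texture, source_textures):
    """Return the index of the source texture whose deep features are closest."""
    query = deep_feature(scene_texture)
    sims = [F.cosine_similarity(query, deep_feature(s), dim=0) for s in source_textures]
    return int(torch.stack(sims).argmax())

# Toy usage: one scene texture map against a small source dataset.
sources = [torch.rand(3, 64, 64) for _ in range(10)]
best = most_similar_source(torch.rand(3, 64, 64), sources)
```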
-
Publication Number: US20230360285A1
Publication Date: 2023-11-09
Application Number: US18341618
Filing Date: 2023-06-26
Applicant: Adobe Inc.
Inventor: Milos Hasan , Liang Shi , Tamy Boubekeur , Kalyan Sunkavalli , Radomir Mech
CPC classification number: G06T11/001 , G06T15/04 , G06N3/084 , G06T11/40
Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
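A toy sketch of the optimization loop this abstract describes: render a differentiable procedural material, compare it to the target image with a Gram-matrix style loss, and update the material parameters by gradient descent. The two-parameter "material" (base color, contrast) and the specific loss are deliberately simplified assumptions.

```python
# Hedged sketch: gradient-based fitting of toy procedural-material parameters.
import torch

def render_material(base_color, contrast, pattern):
    """Toy differentiable procedural material: tinted, contrast-adjusted pattern."""
    return torch.sigmoid(contrast * (pattern - 0.5)) * base_color.view(3, 1, 1)

def gram(image):
    """Gram matrix over channels, a simple proxy for visual-appearance statistics."""
    flat = image.reshape(3, -1)
    return flat @ flat.t() / flat.shape[1]

pattern = torch.rand(3, 64, 64)                 # fixed procedural noise pattern
target = torch.rand(3, 64, 64)                  # stand-in for the target photo

base_color = torch.rand(3, requires_grad=True)
contrast = torch.tensor(1.0, requires_grad=True)
optimizer = torch.optim.Adam([base_color, contrast], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    rendering = render_material(base_color, contrast, pattern)
    loss = torch.nn.functional.mse_loss(gram(rendering), gram(target))
    loss.backward()
    optimizer.step()
```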
-
Publication Number: US20230141395A1
Publication Date: 2023-05-11
Application Number: US17519841
Filing Date: 2021-11-05
Applicant: Adobe Inc.
CPC classification number: G06T19/20 , G06K9/6215 , G06K9/6228 , G06N3/02 , G06T15/04 , G06T2219/2004 , G06T2219/2016
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a visual neural network to replace materials in a three-dimensional scene with visually similar materials from a source dataset. Specifically, the disclosed system utilizes the visual neural network to generate source deep visual features representing source texture maps from materials in a plurality of source materials. Additionally, the disclosed system utilizes the visual neural network to generate deep visual features representing texture maps from materials in a digital scene. The disclosed system then determines source texture maps that are visually similar to the texture maps of the digital scene based on visual similarity metrics that compare the source deep visual features and the deep visual features. Additionally, the disclosed system modifies the digital scene by replacing one or more of the texture maps in the digital scene with the visually similar source texture maps.
-
Publication Number: US20230122623A1
Publication Date: 2023-04-20
Application Number: US17503671
Filing Date: 2021-10-18
Applicant: Adobe Inc.
Inventor: He Zhang , Jeya Maria Jose Valanarasu , Jianming Zhang , Jose Ignacio Echevarria Vallespi , Kalyan Sunkavalli , Yilin Wang , Yinglan Ma , Zhe Lin , Zijun Wei
IPC: G06T11/60 , G06F3/0484 , G06K9/46 , G06K9/62 , G06N3/08
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly generating harmonized digital images utilizing an object-to-object harmonization neural network. For example, the disclosed systems implement, and learn parameters for, an object-to-object harmonization neural network to combine a style code from a reference object with features extracted from a target object. Indeed, the disclosed systems extract a style code from a reference object utilizing a style encoder neural network. In addition, the disclosed systems generate a harmonized target object by applying the style code of the reference object to a target object utilizing an object-to-object harmonization neural network.
-
Publication Number: US20230037282A9
Publication Date: 2023-02-02
Application Number: US17232890
Filing Date: 2021-04-16
Applicant: Adobe Inc.
Inventor: Alan Erickson , Kalyan Sunkavalli , I-Ming Pao , Guotong Feng , Jianming Zhang , Frederick Mandia
Abstract: Systems and methods for image editing are described. Embodiments of the present disclosure provide an image editing system for performing image object replacement or image region replacement (e.g., an image editing system for replacing an object or region of an image with an object or region from another image). For example, the image editing system may replace a sky portion of an image with a more desirable sky portion from a different replacement image. According to some embodiments described herein, real-time color harmonization based on the visible sky region may be used to produce more natural colorization. In some examples, horizon-aware sky alignment and placement with advanced padding may also be used. For example, the horizons of the original image and the replacement image may be automatically detected and aligned, and color harmonization may be performed based on the aligned images.
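A simple sketch in the spirit of this abstract: swap the sky region and nudge the foreground's color statistics toward those of the replacement sky. The mean/std matching rule and the harmonize_strength parameter are assumptions, not the disclosed real-time harmonization.

```python
# Hedged sketch: sky swap with foreground color statistics matched to the new sky.
import numpy as np

def replace_sky(image, sky_mask, new_sky, harmonize_strength=0.5):
    """image, new_sky: HxWx3 float arrays in [0, 1]; sky_mask: HxW bool (True = sky)."""
    out = image.copy()
    out[sky_mask] = new_sky[sky_mask]                     # drop in the new sky

    # Shift foreground statistics toward the new sky's color statistics.
    fg = ~sky_mask
    fg_mean, fg_std = image[fg].mean(0), image[fg].std(0) + 1e-6
    sky_mean, sky_std = new_sky[sky_mask].mean(0), new_sky[sky_mask].std(0)
    matched = (image[fg] - fg_mean) / fg_std * sky_std + sky_mean
    out[fg] = (1 - harmonize_strength) * image[fg] + harmonize_strength * matched
    return np.clip(out, 0.0, 1.0)

# Toy usage: top half of the frame treated as sky.
h, w = 128, 128
mask = np.zeros((h, w), dtype=bool)
mask[: h // 2] = True
composite = replace_sky(np.random.rand(h, w, 3), mask, np.random.rand(h, w, 3))
```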
-
Publication Number: US20230005197A9
Publication Date: 2023-01-05
Application Number: US17204638
Filing Date: 2021-03-17
Applicant: Adobe Inc.
Inventor: Jianming Zhang , Alan Erickson , I-Ming Pao , Guotong Feng , Kalyan Sunkavalli , Frederick Mandia , Hyunghwan Byun , Betty Leong , Meredith Payne Stotzner , Yukie Takahashi , Quynn Megan Le , Sarah Kong
IPC: G06T11/60 , G06T11/00 , G06F3/0484
Abstract: The present disclosure provides systems and methods for image editing. Embodiments of the present disclosure provide an image editing system for performing image object replacement or image region replacement (e.g., an image editing system for replacing an object or region of an image with an object or region from another image). For example, the image editing system may replace a sky portion of an image with a more desirable sky portion from a different replacement image. The original image and the replacement image (e.g., the image including a desirable object or region) include layers of masks. A sky from the replacement image may replace the sky of the image to produce an aesthetically pleasing composite image.
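A brief sketch of layered-mask compositing as described here: feather the sky mask into a soft alpha layer and blend the replacement sky over the original image. The box-blur feathering and all names are illustrative assumptions.

```python
# Hedged sketch: soft-mask ("layered mask") compositing of a replacement sky.
import numpy as np

def feather_mask(mask, radius=4):
    """Soften a binary mask with a separable box blur to get a [0, 1] alpha layer."""
    alpha = mask.astype(float)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    alpha = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, alpha)
    alpha = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, alpha)
    return np.clip(alpha, 0.0, 1.0)

def composite_sky(image, sky_mask, new_sky):
    """Blend the replacement sky over the original image using the soft mask layer."""
    alpha = feather_mask(sky_mask)[..., None]
    return alpha * new_sky + (1.0 - alpha) * image

# Toy usage: binary mask marking the upper third of the frame as sky.
h, w = 96, 96
mask = np.zeros((h, w), dtype=bool)
mask[: h // 3] = True
result = composite_sky(np.random.rand(h, w, 3), mask, np.random.rand(h, w, 3))
```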
-
Publication Number: US11538216B2
Publication Date: 2022-12-27
Application Number: US16558975
Filing Date: 2019-09-03
Applicant: Adobe Inc. , Université Laval
Inventor: Kalyan Sunkavalli , Yannick Hold-Geoffroy , Christian Gagne , Marc-Andre Gardner , Jean-Francois Lalonde
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.