-
Publication number: US20230098115A1
Publication date: 2023-03-30
Application number: US18062460
Filing date: 2022-12-06
Applicant: Adobe Inc. , Université Laval
Inventor: Kalyan Sunkavalli , Yannick Hold-Geoffroy , Christian Gagne , Marc-Andre Gardner , Jean-Francois Lalonde
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
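As a rough illustration of the design this abstract describes (common network layers shared across outputs, feeding heads specific to different 3D lighting parameters), here is a minimal PyTorch sketch. The ResNet-18 trunk, the particular parameter heads (direction, distance, size, color, ambient), and all layer sizes are assumptions made for illustration, not the patented architecture.

```python
# Hypothetical sketch of a lighting-estimation network with shared ("common")
# layers and separate heads for different 3D lighting parameters.
# Backbone choice, head outputs, and sizes are illustrative assumptions only.
import torch
import torch.nn as nn
import torchvision.models as models


class SourceSpecificLightingNet(nn.Module):
    def __init__(self, num_lights: int = 3, feat_dim: int = 512):
        super().__init__()
        # Common network layers: an image encoder shared by all parameter heads.
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 512, 1, 1)

        def head(out_dim: int) -> nn.Module:
            # Parameter-specific layers: a small MLP per lighting quantity.
            return nn.Sequential(
                nn.Linear(feat_dim, 256), nn.ReLU(inplace=True),
                nn.Linear(256, num_lights * out_dim),
            )

        self.direction_head = head(3)   # unit direction per light source
        self.distance_head = head(1)    # distance from the camera
        self.size_head = head(1)        # angular size of the source
        self.color_head = head(3)       # RGB intensity
        self.ambient_head = nn.Linear(feat_dim, 3)  # global ambient term

    def forward(self, image: torch.Tensor) -> dict:
        feat = self.encoder(image).flatten(1)
        return {
            "direction": self.direction_head(feat),
            "distance": self.distance_head(feat),
            "size": self.size_head(feat),
            "color": self.color_head(feat),
            "ambient": self.ambient_head(feat),
        }


if __name__ == "__main__":
    net = SourceSpecificLightingNet()
    params = net(torch.randn(1, 3, 224, 224))
    print({k: v.shape for k, v in params.items()})
```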
-
Publication number: US20200074600A1
Publication date: 2020-03-05
Application number: US16678072
Filing date: 2019-11-08
Applicant: Adobe Inc.
Inventor: Kalyan Sunkavalli , Mehmet Ersin Yumer , Marc-Andre Gardner , Xiaohui Shen , Jonathan Eisenmann , Emiliano Gambaretto
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
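A minimal sketch of the encoder/intensity-decoder pairing described in this abstract is shown below. The convolutional layer counts, channel widths, output resolution, and the Softplus used to keep intensities non-negative are assumptions for illustration, not details from the patent.

```python
# Minimal encoder/decoder sketch for single-image light-intensity estimation.
# Layer counts, channel widths, and resolutions are assumptions only.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Encodes an input image into an intermediate representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class IntensityDecoder(nn.Module):
    """Decodes the intermediate representation into a light intensity map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Softplus(),  # non-negative intensities
        )

    def forward(self, z):
        return self.net(z)


if __name__ == "__main__":
    enc, dec = Encoder(), IntensityDecoder()
    intensity_map = dec(enc(torch.randn(1, 3, 128, 256)))
    print(intensity_map.shape)  # (1, 1, 128, 256)
```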
-
Publication number: US11443412B2
Publication date: 2022-09-13
Application number: US16678072
Filing date: 2019-11-08
Applicant: Adobe Inc.
Inventor: Kalyan Sunkavalli , Mehmet Ersin Yumer , Marc-Andre Gardner , Xiaohui Shen , Jonathan Eisenmann , Emiliano Gambaretto
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
-
Publication number: US20210065440A1
Publication date: 2021-03-04
Application number: US16558975
Filing date: 2019-09-03
Applicant: Adobe Inc. , Université Laval
Inventor: Kalyan Sunkavalli , Yannick Hold-Geoffroy , Christian Gagne , Marc-Andre Gardner , Jean-Francois Lalonde
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
-
Publication number: US10475169B2
Publication date: 2019-11-12
Application number: US15824943
Filing date: 2017-11-28
Applicant: Adobe Inc.
Inventor: Kalyan Sunkavalli , Mehmet Ersin Yumer , Marc-Andre Gardner , Xiaohui Shen , Jonathan Eisenmann , Emiliano Gambaretto
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
-
Publication number: US12008710B2
Publication date: 2024-06-11
Application number: US18062460
Filing date: 2022-12-06
Applicant: Adobe Inc. , Université Laval
Inventor: Kalyan Sunkavalli , Yannick Hold-Geoffroy , Christian Gagne , Marc-Andre Gardner , Jean-Francois Lalonde
CPC classification number: G06T15/506 , G06N3/08 , G06T7/50 , G06T7/60 , G06T7/70 , G06T7/90 , G06T2200/24 , G06T2207/20081 , G06T2207/20084
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
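To make the training signal mentioned in this abstract concrete, the following is a hedged sketch of a differentiable projection step: parametric lights (direction, angular size, color) are rasterized into an equirectangular environment map that can then be compared with a ground-truth map under an ordinary pixel loss. The Gaussian-lobe falloff and the map resolution are assumptions chosen for differentiability and brevity, not necessarily the patented projection layer.

```python
# Hedged sketch: project parametric lights into an equirectangular environment
# map so the prediction can be supervised against a ground-truth map.
# The Gaussian-lobe formulation is an assumption, not the patented method.
import torch


def project_lights_to_envmap(direction, size, color, height=64, width=128):
    """direction: (L, 3) unit vectors; size: (L,) angular spread in radians;
    color: (L, 3) RGB intensities. Returns a (3, H, W) environment map."""
    # Per-pixel unit directions on the sphere (equirectangular grid).
    theta = torch.linspace(0, torch.pi, height)            # polar angle
    phi = torch.linspace(-torch.pi, torch.pi, width)       # azimuth
    t, p = torch.meshgrid(theta, phi, indexing="ij")
    pixel_dirs = torch.stack(
        [torch.sin(t) * torch.cos(p), torch.sin(t) * torch.sin(p), torch.cos(t)], dim=-1
    )                                                       # (H, W, 3)

    envmap = torch.zeros(3, height, width)
    for d, s, c in zip(direction, size, color):
        # Angular distance between each pixel direction and the light direction.
        cos_ang = (pixel_dirs @ d).clamp(-1.0, 1.0)
        ang = torch.acos(cos_ang)
        lobe = torch.exp(-(ang / s) ** 2)                   # smooth, differentiable falloff
        envmap = envmap + c[:, None, None] * lobe[None]
    return envmap


if __name__ == "__main__":
    d = torch.nn.functional.normalize(torch.randn(2, 3), dim=-1)
    env = project_lights_to_envmap(d, torch.tensor([0.2, 0.3]), torch.rand(2, 3))
    gt = torch.rand_like(env)                               # stand-in for a ground-truth map
    loss = torch.nn.functional.mse_loss(env, gt)
    print(env.shape, loss.item())
```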
-
Publication number: US11538216B2
Publication date: 2022-12-27
Application number: US16558975
Filing date: 2019-09-03
Applicant: Adobe Inc. , Université Laval
Inventor: Kalyan Sunkavalli , Yannick Hold-Geoffroy , Christian Gagne , Marc-Andre Gardner , Jean-Francois Lalonde
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
-
Publication number: US20190164261A1
Publication date: 2019-05-30
Application number: US15824943
Filing date: 2017-11-28
Applicant: Adobe Inc.
Inventor: Kalyan Sunkavalli , Mehmet Ersin Yumer , Marc-Andre Gardner , Xiaohui Shen , Jonathan Eisenmann , Emiliano Gambaretto
CPC classification number: G06T5/007 , G06N3/0454 , G06N3/082 , G06T1/0007 , G06T1/20 , G06T7/90 , G06T9/002 , G06T2207/10024 , G06T2207/10152 , G06T2215/12
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
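The two-phase schedule described in this abstract (train a light-mask decoder on low dynamic range data, then fine-tune it on high dynamic range data so it becomes the intensity decoder) could be organized roughly as in the sketch below. The losses, optimizers, and log-space intensity target are illustrative assumptions, and `encoder`/`mask_decoder` stand for any modules with compatible input and output shapes, not the patented procedure.

```python
# Illustrative two-phase training schedule: phase 1 supervises a light-mask
# decoder on LDR data; phase 2 fine-tunes the same weights on HDR data to
# obtain the intensity decoder. Losses and hyperparameters are assumptions.
import torch
import torch.nn as nn


def train_two_phase(encoder, mask_decoder, ldr_loader, hdr_loader, epochs=5):
    bce = nn.BCEWithLogitsLoss()   # phase 1: per-pixel "is this a light source" mask
    l2 = nn.MSELoss()              # phase 2: regress HDR intensities (here in log space)

    # Phase 1: train encoder + mask decoder on binary light masks from LDR images.
    opt = torch.optim.Adam(list(encoder.parameters()) + list(mask_decoder.parameters()), lr=1e-4)
    for _ in range(epochs):
        for image, light_mask in ldr_loader:
            loss = bce(mask_decoder(encoder(image)), light_mask)
            opt.zero_grad(); loss.backward(); opt.step()

    # Phase 2: reuse the same weights and fine-tune on HDR targets; the
    # fine-tuned decoder now plays the role of the intensity decoder.
    intensity_decoder = mask_decoder
    opt = torch.optim.Adam(list(encoder.parameters()) + list(intensity_decoder.parameters()), lr=1e-5)
    for _ in range(epochs):
        for image, hdr_intensity in hdr_loader:
            loss = l2(intensity_decoder(encoder(image)), torch.log1p(hdr_intensity))
            opt.zero_grad(); loss.backward(); opt.step()

    return encoder, intensity_decoder
```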