KERNEL PREDICTION WITH KERNEL DICTIONARY IN IMAGE DENOISING

    Publication number: US20220156588A1

    Publication date: 2022-05-19

    Application number: US17590995

    Application date: 2022-02-02

    Applicant: Adobe Inc.

    Abstract: Certain embodiments involve techniques for efficiently estimating denoising kernels for generating denoised images. For instance, a neural network receives a noisy reference image to denoise. The neural network uses a kernel dictionary of base kernels and generates a coefficient vector for each pixel in the reference image such that the coefficient vector includes a coefficient value for each base kernel in the kernel dictionary, where the base kernels are combined to generate a denoising kernel and each coefficient value indicates a contribution of a given base kernel to a denoising kernel. The neural network calculates the denoising kernel for a given pixel by applying the coefficient vector for that pixel to the kernel dictionary. The neural network applies each denoising kernel to the respective pixel to generate a denoised output image.

    Kernel prediction with kernel dictionary in image denoising

    Publication number: US11281970B2

    Publication date: 2022-03-22

    Application number: US16686978

    Application date: 2019-11-18

    Applicant: Adobe Inc.

    Abstract: Certain embodiments involve techniques for efficiently estimating denoising kernels for generating denoised images. For instance, a neural network receives a noisy reference image to denoise. The neural network uses a kernel dictionary of base kernels and generates a coefficient vector for each pixel in the reference image such that the coefficient vector includes a coefficient value for each base kernel in the kernel dictionary, where the base kernels are combined to generate a denoising kernel and each coefficient value indicates a contribution of a given base kernel to a denoising kernel. The neural network calculates the denoising kernel for a given pixel by applying the coefficient vector for that pixel to the kernel dictionary. The neural network applies each denoising kernel to the respective pixel to generate a denoised output image.
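
    The two entries above describe the same kernel-dictionary approach: for every pixel the network outputs a coefficient vector, the per-pixel denoising kernel is the coefficient-weighted combination of the shared base kernels, and that kernel is applied to the pixel's neighborhood. The Python sketch below illustrates only that combine-and-apply step under assumed shapes; the function names (combine_kernels, apply_denoising_kernels), the neighborhood size, and the random stand-in coefficients are illustrative assumptions, not the patented implementation, and the coefficient vectors would in practice come from the trained network.

    # Minimal sketch of per-pixel kernel-dictionary denoising (assumptions noted above).
    import numpy as np

    def combine_kernels(coefficients, kernel_dictionary):
        # coefficients:      (H, W, K) coefficient vector for every pixel
        # kernel_dictionary: (K, k, k) shared base kernels
        # returns:           (H, W, k, k) per-pixel denoising kernels
        return np.einsum('hwk,kij->hwij', coefficients, kernel_dictionary)

    def apply_denoising_kernels(noisy, kernels):
        # Apply each pixel's kernel to its k x k neighborhood (single channel).
        k = kernels.shape[-1]
        pad = k // 2
        padded = np.pad(noisy, pad, mode='reflect')
        out = np.empty_like(noisy)
        for y in range(noisy.shape[0]):
            for x in range(noisy.shape[1]):
                patch = padded[y:y + k, x:x + k]
                out[y, x] = np.sum(patch * kernels[y, x])
        return out

    if __name__ == '__main__':
        H, W, K, k = 32, 32, 8, 5
        rng = np.random.default_rng(0)
        noisy = rng.random((H, W))
        dictionary = rng.random((K, k, k))
        dictionary /= dictionary.sum(axis=(1, 2), keepdims=True)  # normalize base kernels
        coeffs = rng.dirichlet(np.ones(K), size=(H, W))           # stand-in for network output
        denoised = apply_denoising_kernels(noisy, combine_kernels(coeffs, dictionary))
        print(denoised.shape)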

    Estimating lighting parameters for positions within augmented-reality scenes

    Publication number: US11158117B2

    Publication date: 2021-10-26

    Application number: US16877227

    Application date: 2020-05-18

    Applicant: ADOBE INC.

    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene. As requests to render a virtual object come in real (or near real) time, the system can quickly generate different location-specific-lighting parameters that accurately reflect lighting conditions at different positions within a digital scene in response to render requests.

    Dynamically estimating lighting parameters for positions within augmented-reality scenes using a neural network

    Publication number: US10692277B1

    Publication date: 2020-06-23

    Application number: US16360901

    Application date: 2019-03-21

    Applicant: Adobe Inc.

    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene. As requests to render a virtual object come in real (or near real) time, the system can quickly generate different location-specific-lighting parameters that accurately reflect lighting conditions at different positions within a digital scene in response to render requests.
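
    The two entries above share one abstract: a local-lighting-estimation-neural network takes a digital scene and a designated position and returns location-specific lighting parameters used to render a virtual object at that position. The Python sketch below shows one plausible request/response shape for that interface; it assumes, purely for illustration, that the parameters are 27 spherical-harmonic coefficients (9 per RGB channel) and that the estimator is a tiny convolutional network, and the names LightingEstimator and render_request are hypothetical rather than taken from the patent.

    # Illustrative request/response flow for location-specific lighting estimation.
    import torch
    import torch.nn as nn

    class LightingEstimator(nn.Module):
        # Maps a scene image plus a query position to lighting parameters.
        def __init__(self, num_params=27):  # assumed: 9 SH coefficients x RGB
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Sequential(
                nn.Linear(32 + 2, 64), nn.ReLU(),   # image features + (x, y) query position
                nn.Linear(64, num_params),
            )

        def forward(self, scene, position):
            feats = self.features(scene)
            return self.head(torch.cat([feats, position], dim=1))

    def render_request(model, scene, position):
        # Handle one render request: estimate lighting at the queried position.
        with torch.no_grad():
            return model(scene, position)

    if __name__ == '__main__':
        model = LightingEstimator().eval()
        scene = torch.rand(1, 3, 128, 128)        # RGB digital scene
        position = torch.tensor([[0.25, 0.75]])   # normalized (x, y) query position
        params = render_request(model, scene, position)
        print(params.shape)  # (1, 27) location-specific lighting parameters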

    CONTROLLABLE DYNAMIC APPEARANCE FOR NEURAL 3D PORTRAITS

    Publication number: US20240338915A1

    Publication date: 2024-10-10

    Application number: US18132272

    Application date: 2023-04-07

    Applicant: Adobe Inc.

    Abstract: Certain aspects and features of this disclosure relate to providing a controllable, dynamic appearance for neural 3D portraits. For example, a method involves projecting a color at points in a digital video portrait based on location, surface normal, and viewing direction for each respective point in a canonical space. The method also involves projecting, using the color, dynamic face normals for the points as changing according to an articulated head pose and facial expression in the digital video portrait. The method further involves disentangling, based on the dynamic face normals, a facial appearance in the digital video portrait into intrinsic components in the canonical space. The method additionally involves storing and/or rendering at least a portion of a head pose as a controllable, neural 3D portrait based on the digital video portrait using the intrinsic components.
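
    The abstract above outlines projecting color from canonical-space location, surface normal, and viewing direction, then disentangling the facial appearance into intrinsic components. The Python sketch below assumes, for illustration only, a small MLP that maps (position, normal, view direction) to an albedo and a scalar shading term whose product is the recomposed color; that two-component split, the network architecture, and all names are assumptions, not the patent's stated decomposition.

    # Toy intrinsic-appearance decomposition in a canonical space (assumptions noted above).
    import torch
    import torch.nn as nn

    class IntrinsicAppearanceMLP(nn.Module):
        def __init__(self, hidden=128):
            super().__init__()
            # Input: 3D canonical position + 3D dynamic normal + 3D view direction.
            self.trunk = nn.Sequential(
                nn.Linear(9, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.albedo_head = nn.Sequential(nn.Linear(hidden, 3), nn.Sigmoid())
            self.shading_head = nn.Sequential(nn.Linear(hidden, 1), nn.Softplus())

        def forward(self, position, normal, view_dir):
            h = self.trunk(torch.cat([position, normal, view_dir], dim=-1))
            albedo = self.albedo_head(h)      # view-independent intrinsic color
            shading = self.shading_head(h)    # pose/expression-dependent shading
            color = albedo * shading          # recomposed appearance
            return color, albedo, shading

    if __name__ == '__main__':
        mlp = IntrinsicAppearanceMLP()
        pts = torch.rand(1024, 3)             # canonical-space sample points
        normals = torch.randn(1024, 3)
        normals = normals / normals.norm(dim=-1, keepdim=True)
        views = torch.tensor([[0.0, 0.0, 1.0]]).expand(1024, 3)
        color, albedo, shading = mlp(pts, normals, views)
        print(color.shape, albedo.shape, shading.shape)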
