NEURAL PHOTOFINISHER DIGITAL CONTENT STYLIZATION

    Publication Number: US20240202989A1

    Publication Date: 2024-06-20

    Application Number: US18067878

    Application Date: 2022-12-19

    Applicant: Adobe Inc.

    Abstract: Digital content stylization techniques are described that leverage a neural photofinisher to generate stylized digital images. In one example, the neural photofinisher is implemented as part of a stylization system to train a neural network to perform digital image style transfer operations using reference digital content as training data. The training includes calculating a style loss term that identifies a particular visual style of the reference digital content. Once trained, the stylization system receives a digital image and generates a feature map of a scene depicted by the digital image. Based on the feature map as well as the style loss, the stylization system determines visual parameter values to apply to the digital image to incorporate a visual appearance of the particular visual style. The stylization system generates the stylized digital image by applying the visual parameter values to the digital image automatically and without user intervention.
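
    A minimal PyTorch sketch of the flow the abstract describes, under loose assumptions: a small network predicts global photofinishing parameters (here exposure, contrast, and saturation, chosen purely for illustration) from the input image, the parameters are applied differentiably, and training uses a Gram-matrix style loss against the reference content as a common stand-in for the style loss term. None of the module names, parameter sets, or sizes come from the patent.

# Hedged sketch: predict global "photofinishing" parameters from the input
# image and train them with a Gram-matrix style loss. Parameter set and
# network sizes are illustrative assumptions, not the patent's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParameterPredictor(nn.Module):
    """Maps a feature map of the scene to a small set of visual parameters."""
    def __init__(self, feat_channels=64, num_params=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(feat_channels, num_params)

    def forward(self, image):
        feats = self.encoder(image)           # feature map of the depicted scene
        pooled = feats.mean(dim=(2, 3))       # global summary of the feature map
        return torch.tanh(self.head(pooled))  # exposure, contrast, saturation in [-1, 1]

def apply_parameters(image, params):
    """Apply the predicted global adjustments as a simple differentiable photofinish."""
    exposure, contrast, saturation = params[:, 0:1], params[:, 1:2], params[:, 2:3]
    out = image * (1.0 + exposure.view(-1, 1, 1, 1))
    mean = out.mean(dim=(2, 3), keepdim=True)
    out = (out - mean) * (1.0 + contrast.view(-1, 1, 1, 1)) + mean
    gray = out.mean(dim=1, keepdim=True)
    out = gray + (out - gray) * (1.0 + saturation.view(-1, 1, 1, 1))
    return out.clamp(0.0, 1.0)

def gram_style_loss(stylized, reference):
    """Gram-matrix style loss, a common way to characterize a visual style."""
    def gram(x):
        b, c, h, w = x.shape
        f = x.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)
    return F.mse_loss(gram(stylized), gram(reference))

if __name__ == "__main__":
    model = ParameterPredictor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    image = torch.rand(1, 3, 128, 128)      # input digital image
    reference = torch.rand(1, 3, 128, 128)  # reference content carrying the target style
    for _ in range(5):                      # tiny training loop for illustration
        params = model(image)
        stylized = apply_parameters(image, params)
        loss = gram_style_loss(stylized, reference)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print("style loss:", float(loss))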

    UTILIZING MACHINE LEARNING MODELS TO GENERATE REFINED DEPTH MAPS WITH SEGMENTATION MASK GUIDANCE

    Publication Number: US20230326028A1

    Publication Date: 2023-10-12

    Application Number: US17658873

    Application Date: 2022-04-12

    Applicant: Adobe Inc.

    CPC classification number: G06T7/11 G06T2207/20084 G06T7/50 G06T7/215

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing machine learning models to generate refined depth maps of digital images utilizing digital segmentation masks. In particular, in one or more embodiments, the disclosed systems generate a depth map for a digital image utilizing a depth estimation machine learning model, determine a digital segmentation mask for the digital image, and generate a refined depth map from the depth map and the digital segmentation mask utilizing a depth refinement machine learning model. In some embodiments, the disclosed systems generate first and second intermediate depth maps using the digital segmentation mask and an inverse digital segmentation mask and merge the first and second intermediate depth maps to generate the refined depth map.
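
    A minimal sketch of the mask-guided refinement flow in the abstract's last sentence, assuming PyTorch and tiny convolutional stand-ins for the depth estimation and depth refinement models: one intermediate depth map is produced with the segmentation mask, another with the inverse mask, and the two are merged. The toy thresholded mask and the network shapes are assumptions for illustration only.

# Hedged sketch of mask-guided depth refinement: intermediate depth maps for
# the masked and inverse-masked regions are merged into a refined depth map.
# The tiny networks stand in for the real estimation/refinement models.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Stand-in for a depth estimation or depth refinement machine learning model."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def refine_depth(image, depth_estimator, depth_refiner):
    # 1. Initial depth map from the depth estimation model.
    depth = depth_estimator(image)

    # 2. Digital segmentation mask (a toy threshold here; a real system would use
    #    a segmentation model) and its inverse.
    mask = (image.mean(dim=1, keepdim=True) > 0.5).float()
    inverse_mask = 1.0 - mask

    # 3. First and second intermediate depth maps, conditioned on the mask and
    #    on the inverse mask respectively.
    inter_fg = depth_refiner(torch.cat([image, depth, mask], dim=1))
    inter_bg = depth_refiner(torch.cat([image, depth, inverse_mask], dim=1))

    # 4. Merge the intermediate depth maps into the refined depth map.
    return mask * inter_fg + inverse_mask * inter_bg

if __name__ == "__main__":
    estimator = TinyDepthNet(in_channels=3)
    refiner = TinyDepthNet(in_channels=5)   # image (3) + depth (1) + mask (1)
    image = torch.rand(1, 3, 64, 64)
    refined = refine_depth(image, estimator, refiner)
    print(refined.shape)                    # torch.Size([1, 1, 64, 64])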

    GENERATING IMAGE DIFFERENCE CAPTIONS VIA AN IMAGE-TEXT CROSS-MODAL NEURAL NETWORK

    Publication Number: US20250131753A1

    Publication Date: 2025-04-24

    Application Number: US18489681

    Application Date: 2023-10-18

    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating difference captions indicating detected differences in digital image pairs. The disclosed system generates a first feature map of a first digital image and a second feature map of a second digital image. The disclosed system converts, utilizing a linear projection neural network, the first feature map to a first modified feature map in a feature space corresponding to a large language machine-learning model. The disclosed system also converts, utilizing the linear projection neural network, the second feature map to a second modified feature map in the feature space corresponding to the large language machine-learning model. The disclosed system further generates, utilizing the large language machine-learning model, a difference caption indicating a difference between the first digital image and the second digital image from a combination of the first modified feature map and the second modified feature map.
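
    A minimal sketch of the projection-and-decode pipeline described above, with tiny stand-ins for the image encoder and the large language model (all names and dimensions are assumptions; a real system would use pretrained components): each image's feature map is linearly projected into the language model's feature space, and the two projected maps are combined before decoding.

# Hedged sketch: project two image feature maps into a language-model feature
# space with a shared linear projection, combine them, and decode. The encoder
# and language model below are toy stand-ins, not the patent's components.
import torch
import torch.nn as nn

class TinyImageEncoder(nn.Module):
    """Stand-in visual encoder producing a token-like feature map per image."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=4, padding=1), nn.ReLU(),
        )

    def forward(self, image):
        feats = self.conv(image)                 # (B, C, H, W)
        return feats.flatten(2).transpose(1, 2)  # (B, H*W, C) sequence of features

class TinyLanguageModel(nn.Module):
    """Stand-in for the large language machine-learning model."""
    def __init__(self, embed_dim=64, vocab_size=1000):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(embed_dim, vocab_size)

    def forward(self, embeddings):
        return self.lm_head(self.backbone(embeddings))  # per-position token logits

encoder = TinyImageEncoder(feat_dim=32)
projection = nn.Linear(32, 64)          # linear projection into the LLM feature space
language_model = TinyLanguageModel(embed_dim=64)

image_a = torch.rand(1, 3, 64, 64)      # first digital image
image_b = torch.rand(1, 3, 64, 64)      # second digital image

feat_a = projection(encoder(image_a))   # first modified feature map
feat_b = projection(encoder(image_b))   # second modified feature map

# Combine both projected feature maps and decode a (toy) difference caption.
combined = torch.cat([feat_a, feat_b], dim=1)
logits = language_model(combined)
caption_token_ids = logits.argmax(dim=-1)
print(caption_token_ids.shape)          # one predicted token id per input position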

    GENERATING COLOR-EDITED DIGITAL IMAGES UTILIZING A CONTENT AWARE DIFFUSION NEURAL NETWORK

    Publication Number: US20250046055A1

    Publication Date: 2025-02-06

    Application Number: US18363980

    Application Date: 2023-08-02

    Applicant: Adobe Inc.

    Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that train (and utilize) an image color editing diffusion neural network to generate one or more color-edited digital images for a digital image. In particular, in one or more implementations, the disclosed systems identify a digital image depicting content in a first color style. Moreover, the disclosed systems generate, from the digital image utilizing an image color editing diffusion neural network, a color-edited digital image depicting the content in a second color style different from the first color style. Further, the disclosed systems provide, for display within a graphical user interface, the color-edited digital image.
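
    A minimal sketch of conditional diffusion sampling in the spirit of the abstract, assuming PyTorch: the source image (first color style) conditions every denoising step so the depicted content is preserved while the output can adopt a different color appearance. The tiny denoiser, linear noise schedule, and step count are illustrative assumptions, not the patent's image color editing diffusion neural network.

# Hedged sketch: reverse-diffusion sampling conditioned on the source image,
# so content is kept while the color style can change. Untrained toy model.
import torch
import torch.nn as nn

class TinyConditionalDenoiser(nn.Module):
    """Predicts noise from the noisy sample concatenated with the source image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, noisy, source):
        return self.net(torch.cat([noisy, source], dim=1))

@torch.no_grad()
def sample_color_edit(denoiser, source, steps=50):
    """Reverse-diffusion sampling conditioned on the content of `source`."""
    x = torch.randn_like(source)               # start from pure noise
    betas = torch.linspace(1e-4, 0.02, steps)  # simple linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    for t in reversed(range(steps)):
        predicted_noise = denoiser(x, source)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * predicted_noise) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x.clamp(0.0, 1.0)                   # color-edited digital image

if __name__ == "__main__":
    denoiser = TinyConditionalDenoiser()       # would be trained on paired color styles
    source_image = torch.rand(1, 3, 64, 64)    # digital image in the first color style
    edited = sample_color_edit(denoiser, source_image)
    print(edited.shape)                        # torch.Size([1, 3, 64, 64])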
