System and methods for modeling creation workflows

    Publication No.: US11354792B2

    Publication Date: 2022-06-07

    Application No.: US16785437

    Filing Date: 2020-02-07

    Applicant: ADOBE INC.

    Abstract: Technologies for image processing based on a creation workflow for creating a type of image are provided. Both multi-stage image generation and multi-stage image editing of an existing image are supported. To accomplish this, one system models the sequential creation stages of the creation workflow. In the backward direction, inference networks can backward-transform an image into various intermediate stages. In the forward direction, generation networks can forward-transform an earlier-stage image into a later-stage image based on stage-specific operations. Advantageously, this technical solution overcomes the limitations of the single-stage generation strategy with a multi-stage framework that models different types of variation at various creation stages. As a result, both novices and seasoned artists can use these technologies to efficiently perform complex artwork creation or editing tasks.
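
    As an illustration of the forward direction described above, the following minimal PyTorch sketch chains stage-specific generation networks so that an earlier-stage image plus a stage-specific latent code yields a later-stage image. The StageGenerator class, its architecture, and the three-stage split (e.g., sketch, flat color, detailed artwork) are illustrative assumptions, not the patented implementation.

        # Minimal sketch of the forward (generation) direction; all names are assumptions.
        import torch
        import torch.nn as nn

        class StageGenerator(nn.Module):
            """Earlier-stage image + stage-specific latent -> later-stage image."""
            def __init__(self, channels=3, latent_dim=8):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(channels + latent_dim, 64, 3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(64, channels, 3, padding=1),
                    nn.Tanh(),
                )

            def forward(self, image, latent):
                # Broadcast the stage-specific latent across all spatial positions.
                b, _, h, w = image.shape
                latent_map = latent.view(b, -1, 1, 1).expand(b, latent.size(1), h, w)
                return self.net(torch.cat([image, latent_map], dim=1))

        # Three assumed creation stages, e.g. sketch -> flat color -> detailed artwork.
        stages = nn.ModuleList([StageGenerator() for _ in range(3)])

        def generate(first_stage_image, latents):
            """Run the forward direction stage by stage to produce the final image."""
            x = first_stage_image
            for gen, z in zip(stages, latents):
                x = gen(x, z)
            return x

        x0 = torch.randn(1, 3, 64, 64)              # placeholder first-stage image
        zs = [torch.randn(1, 8) for _ in range(3)]  # one latent code per creation stage
        final_image = generate(x0, zs)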

    SYSTEM AND METHODS FOR MODELING CREATION WORKFLOWS

    Publication No.: US20210248727A1

    Publication Date: 2021-08-12

    Application No.: US16785437

    Filing Date: 2020-02-07

    Applicant: ADOBE INC.

    Abstract: This disclosure includes technologies for image processing based on a creation workflow for creating a type of image. The disclosed technologies support both multi-stage image generation and multi-stage image editing of an existing image. To accomplish this, the disclosed system models the sequential creation stages of the creation workflow. In the backward direction, inference networks can backward-transform an image into various intermediate stages. In the forward direction, generation networks can forward-transform an earlier-stage image into a later-stage image based on stage-specific operations. Advantageously, the disclosed technical solution overcomes the limitations of the single-stage generation strategy with a multi-stage framework that models different types of variation at various creation stages. As a result, both novices and seasoned artists can use the disclosed technologies to efficiently perform complex artwork creation or editing tasks.
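
    To illustrate the editing path described in this abstract, the sketch below pairs an inference network (backward direction) with a generation network (forward direction): an existing image is backward-transformed to an intermediate stage, adjusted, and then forward-transformed again. The InferenceNet and GenerationNet classes and the stand-in "edit" are assumptions for illustration only.

        # Minimal editing sketch (all names and architectures are assumptions).
        import torch
        import torch.nn as nn

        def conv_block(in_ch, out_ch):
            return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                                 nn.ReLU(inplace=True))

        class InferenceNet(nn.Module):
            """Backward direction: later-stage image -> earlier-stage image."""
            def __init__(self, channels=3):
                super().__init__()
                self.net = nn.Sequential(conv_block(channels, 64),
                                         nn.Conv2d(64, channels, 3, padding=1))

            def forward(self, image):
                return self.net(image)

        class GenerationNet(nn.Module):
            """Forward direction: earlier-stage image -> later-stage image."""
            def __init__(self, channels=3):
                super().__init__()
                self.net = nn.Sequential(conv_block(channels, 64),
                                         nn.Conv2d(64, channels, 3, padding=1))

            def forward(self, image):
                return self.net(image)

        infer, regenerate = InferenceNet(), GenerationNet()

        existing_artwork = torch.randn(1, 3, 64, 64)   # placeholder existing image
        intermediate = infer(existing_artwork)         # backward-transform to an earlier stage
        edited = intermediate + 0.1 * torch.randn_like(intermediate)  # stand-in for a user edit
        re_rendered = regenerate(edited)               # forward-transform back to a later stage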

    GENERATING STYLIZED-STROKE IMAGES FROM SOURCE IMAGES UTILIZING STYLE-TRANSFER-NEURAL NETWORKS WITH NON-PHOTOREALISTIC-RENDERING

    Publication No.: US20200151938A1

    Publication Date: 2020-05-14

    Application No.: US16184289

    Filing Date: 2018-11-08

    Applicant: Adobe Inc.

    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that integrate (or embed) a non-photorealistic rendering (“NPR”) generator with a style-transfer neural network to generate stylized images that both correspond to a source image and resemble a stroke style. By integrating an NPR generator with a style-transfer neural network, the disclosed systems can accurately capture a stroke style resembling stylized edges, stylized shadings, or both. When training such a style-transfer neural network, the integrated NPR generator enables the disclosed systems to use real-stroke drawings (instead of conventional paired ground-truth drawings) to train the network to accurately portray a stroke style. In some implementations, the disclosed systems can train or apply a style-transfer neural network that captures a variety of stroke styles, such as different edge-stroke styles or shading-stroke styles.
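
    The following sketch illustrates, under stated assumptions, how an NPR step can be composed with a learnable style-transfer network: a simple Sobel edge extractor stands in for the NPR generator, and StyleTransferNet is a placeholder architecture; neither is the disclosed system.

        # Composing an NPR stand-in with a placeholder style-transfer network (illustrative only).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def sobel_edges(image):
            """Crude NPR stand-in: per-channel Sobel gradient magnitude."""
            kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
            ky = kx.transpose(2, 3)
            c = image.size(1)
            gx = F.conv2d(image, kx.repeat(c, 1, 1, 1), padding=1, groups=c)
            gy = F.conv2d(image, ky.repeat(c, 1, 1, 1), padding=1, groups=c)
            return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

        class StyleTransferNet(nn.Module):
            """Maps an NPR rendering to a stylized-stroke image (assumed architecture)."""
            def __init__(self, channels=3):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
                )

            def forward(self, npr_image):
                return self.net(npr_image)

        style_net = StyleTransferNet()
        source = torch.rand(1, 3, 128, 128)        # placeholder source photo
        stylized = style_net(sobel_edges(source))  # NPR rendering -> stylized strokes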

    Multi-style texture synthesis
    Granted Patent

    Publication No.: US10192321B2

    Publication Date: 2019-01-29

    Application No.: US15409321

    Filing Date: 2017-01-18

    Applicant: ADOBE INC.

    Abstract: Systems and techniques are provided that synthesize an image with texture similar to a selected style image. A generator network is trained to synthesize texture images depending on a selection unit input. The training configures the generator network to synthesize texture images resembling whichever of multiple style images is selected by the selection unit input. The generator network can be configured to minimize a covariance matrix-based style loss and/or a diversity loss when synthesizing the texture images. After training, the generator network is used to synthesize texture images for selected style images. For example, this can involve receiving user input selecting a style image, determining the selection unit input from the selected style image, and synthesizing texture images using the generator network with the selection unit input and a noise input.
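
    The sketch below illustrates two pieces named in this abstract, a covariance matrix-based style loss and a selection unit input combined with noise, using placeholder feature maps. The feature shapes, noise dimension, and helper names are assumptions for illustration, not the patented method.

        # Covariance-based style loss and one-hot selection input (illustrative assumptions).
        import torch
        import torch.nn.functional as F

        def covariance_matrix(features):
            """Channel covariance of a (B, C, H, W) feature map."""
            b, c, h, w = features.shape
            f = features.view(b, c, h * w)
            f = f - f.mean(dim=2, keepdim=True)           # center each channel
            return torch.bmm(f, f.transpose(1, 2)) / (h * w)

        def covariance_style_loss(generated_feats, style_feats):
            """Match channel covariances of generated and style-image features."""
            return F.mse_loss(covariance_matrix(generated_feats),
                              covariance_matrix(style_feats))

        def selection_input(style_index, num_styles, noise_dim=16, batch=1):
            """One-hot selection of a style plus a noise vector, as generator input."""
            one_hot = F.one_hot(torch.tensor([style_index] * batch), num_styles).float()
            noise = torch.randn(batch, noise_dim)
            return torch.cat([one_hot, noise], dim=1)

        # Example: compute the style loss on random stand-in feature maps.
        gen_feats = torch.randn(1, 64, 32, 32)
        sty_feats = torch.randn(1, 64, 32, 32)
        loss = covariance_style_loss(gen_feats, sty_feats)
        z = selection_input(style_index=2, num_styles=5)   # pick the third style image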
