IMAGE EDITING BY A GENERATIVE ADVERSARIAL NETWORK USING KEYPOINTS OR SEGMENTATION MASKS CONSTRAINTS

    Publication No.: US20210264207A1

    Publication Date: 2021-08-26

    Application No.: US16802243

    Application Date: 2020-02-26

    Applicant: ADOBE INC.

    Abstract: Images can be edited to include features similar to a different target image. An unconditional generative adversarial network (GAN) is employed to edit features of an initial image based on a constraint determined from a target image. The constraint used by the GAN is determined from keypoints or segmentation masks of the target image, and edits are made to features of the initial image based on keypoints or segmentation masks of the initial image corresponding to those of the constraint from the target image. The GAN modifies the initial image based on a loss function having a variable for the constraint. The result of this optimization process is a modified initial image having features similar to the target image subject to the constraint determined from the identified keypoints or segmentation masks.
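The optimization the abstract describes — iteratively adjusting the image (via the generator's latent code) to minimize a loss with a term for the keypoint constraint — can be sketched with stand-in linear networks. Here `G`, `K`, and the gradient-descent loop are hypothetical placeholders for the trained GAN and keypoint detector, not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the pretrained networks (illustrative only):
# G maps a latent vector to a flat "image"; K extracts "keypoints" from an image.
W_g = rng.normal(size=(64, 16)) / np.sqrt(16)   # generator: latent (16) -> image (64)
W_k = rng.normal(size=(8, 64)) / np.sqrt(64)    # keypoint net: image (64) -> keypoints (8)

def G(z):
    return W_g @ z

def K(img):
    return W_k @ img

# Constraint: keypoints detected in the target image.
z_target = rng.normal(size=16)
kp_target = K(G(z_target))

# Start from the initial image's latent and minimize the constraint loss
# L(z) = ||K(G(z)) - kp_target||^2 by gradient descent.
z = rng.normal(size=16)
A = W_k @ W_g                     # composed linear map; grad = 2 A^T (A z - kp_target)
lr = 0.05
for _ in range(20000):
    residual = A @ z - kp_target
    z -= lr * 2.0 * A.T @ residual

edited = G(z)                     # modified initial image satisfying the constraint
loss = float(np.sum((K(edited) - kp_target) ** 2))
print(f"final constraint loss: {loss:.6f}")
```

In the actual system the generator and keypoint/segmentation extractor are deep networks and the loss typically includes additional fidelity terms; the sketch only shows the shape of the constrained optimization.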

    NEURAL NETWORK-BASED CAMERA CALIBRATION
    Invention Application

    Publication No.: US20200074682A1

    Publication Date: 2020-03-05

    Application No.: US16675641

    Application Date: 2019-11-06

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters. Once trained, the convolutional neural network can receive a new digital image, and based on detected image characteristics thereof, estimate a corresponding set of camera calibration parameters with a calculated level of confidence.
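The training-data generation step — extracting many perspective crops with known calibration labels from one equirectangular panorama — can be sketched as below. The function name, the parameter ranges, and the pinhole/equirectangular math are illustrative assumptions, not the patented method:

```python
import numpy as np

def perspective_from_pano(pano, pitch_deg, roll_deg, fov_deg, out_hw=(64, 64)):
    """Extract a perspective crop from an equirectangular panorama, so each
    crop carries known (pitch, roll, field-of-view) training labels."""
    H, W = pano.shape[:2]
    h, w = out_hw
    f = (w / 2) / np.tan(np.radians(fov_deg) / 2)    # focal length in pixels

    # Rays through each output pixel in camera coordinates.
    xs, ys = np.meshgrid(np.arange(w) - w / 2 + 0.5, np.arange(h) - h / 2 + 0.5)
    rays = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate rays by roll (about the optical axis) then pitch.
    r, p = np.radians(roll_deg), np.radians(pitch_deg)
    Rz = np.array([[np.cos(r), -np.sin(r), 0], [np.sin(r), np.cos(r), 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    d = rays @ (Rx @ Rz).T

    # Convert ray directions to equirectangular (longitude, latitude) pixels.
    lon = np.arctan2(d[..., 0], d[..., 2])           # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))       # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return pano[v, u]

# Sample labels from an assumed plausible range and extract labeled crops.
rng = np.random.default_rng(1)
pano = rng.random((256, 512))
labels = [(rng.uniform(-30, 30), rng.uniform(-10, 10), rng.uniform(40, 90))
          for _ in range(4)]
crops = [perspective_from_pano(pano, p, r, f) for p, r, f in labels]
print(len(crops), crops[0].shape)
```

Each (crop, labels) pair would then serve as one supervised training example for the calibration network.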

    Neural network-based camera calibration

    Publication No.: US10515460B2

    Publication Date: 2019-12-24

    Application No.: US15826331

    Application Date: 2017-11-29

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters. Once trained, the convolutional neural network can receive a new digital image, and based on detected image characteristics thereof, estimate a corresponding set of camera calibration parameters with a calculated level of confidence.

    GENERATING SHADOWS FOR PLACED OBJECTS IN DEPTH ESTIMATED SCENES OF TWO-DIMENSIONAL IMAGES

    Publication No.: US20240135612A1

    Publication Date: 2024-04-25

    Application No.: US18304113

    Application Date: 2023-04-20

    Applicant: Adobe Inc.

    CPC classification number: G06T11/60 G06T7/194 G06T7/50 G06T7/68 G06T15/60

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images. The disclosed systems further use three-dimensional representations of two-dimensional images to customize focal points for the two-dimensional images.
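A toy version of the shadow-generation idea — projecting a placed object onto the ground using per-pixel height from an estimated 3-D representation and an assumed light direction — can be sketched as follows (the function and the hard-shadow model are illustrative stand-ins for the patented shadow-map pipeline):

```python
import numpy as np

def hard_shadow(mask, height, light_dir_xy):
    """Cast a hard shadow of a masked object onto the image plane.
    mask: bool (H, W) object pixels; height: per-pixel height above the ground
    (in pixels); light_dir_xy: (dx, dy) image-space offset per unit height,
    an assumed proxy for the light direction."""
    H, W = mask.shape
    shadow = np.zeros((H, W), dtype=bool)
    ys, xs = np.nonzero(mask)
    dx, dy = light_dir_xy
    sx = np.round(xs + dx * height[ys, xs]).astype(int)
    sy = np.round(ys + dy * height[ys, xs]).astype(int)
    ok = (sx >= 0) & (sx < W) & (sy >= 0) & (sy < H)
    shadow[sy[ok], sx[ok]] = True
    return shadow & ~mask          # shadow only where the object itself is not

mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 3] = True                # a thin vertical object
height = np.zeros((8, 8))
height[2:5, 3] = [3, 2, 1]         # taller toward the top
shadow = hard_shadow(mask, height, light_dir_xy=(1.0, 1.0))
print(shadow.sum())                # prints 3
```

A production system would soften and shade this mask via shadow maps rather than paint a binary region.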

    GENERATING SCALE FIELDS INDICATING PIXEL-TO-METRIC DISTANCES RELATIONSHIPS IN DIGITAL IMAGES VIA NEURAL NETWORKS

    Publication No.: US20240127509A1

    Publication Date: 2024-04-18

    Application No.: US18304134

    Application Date: 2023-04-20

    Applicant: Adobe Inc.

    CPC classification number: G06T11/60 G06T3/4046

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images. The disclosed systems further use three-dimensional representations of two-dimensional images to customize focal points for the two-dimensional images.
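The scale-field idea in this record's title — a per-pixel map of metric distance per pixel — can be illustrated with a hand-built field (in the patented system the field would be predicted by a neural network from the image; the field and the integration helper below are assumptions for illustration):

```python
import numpy as np

# Hypothetical scale field in meters-per-pixel: pixels higher in the frame are
# farther away, so each pixel there spans more real-world meters.
H, W = 100, 100
rows = np.arange(H)[:, None]
scale_field = (0.01 + 0.04 * (1 - rows / (H - 1))) * np.ones((H, W))  # m/px

def metric_length(scale_field, p0, p1, samples=200):
    """Approximate the real-world length of the pixel segment p0 -> p1
    by integrating the scale field along it."""
    (y0, x0), (y1, x1) = p0, p1
    t = np.linspace(0.0, 1.0, samples)
    ys = np.clip(np.round(y0 + t * (y1 - y0)).astype(int), 0, scale_field.shape[0] - 1)
    xs = np.clip(np.round(x0 + t * (x1 - x0)).astype(int), 0, scale_field.shape[1] - 1)
    pixel_len = np.hypot(y1 - y0, x1 - x0)
    return float(pixel_len * scale_field[ys, xs].mean())

# The same 50-pixel segment measures longer in meters when it is farther away:
near = metric_length(scale_field, (95, 10), (95, 60))   # near the bottom (close)
far = metric_length(scale_field, (5, 10), (5, 60))      # near the top (far)
print(f"near: {near:.2f} m, far: {far:.2f} m")          # prints "near: 0.58 m, far: 2.40 m"
```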

    LARGE-SCALE OUTDOOR AUGMENTED REALITY SCENES USING CAMERA POSE BASED ON LEARNED DESCRIPTORS

    Publication No.: US20220114365A1

    Publication Date: 2022-04-14

    Application No.: US17068429

    Application Date: 2020-10-12

    Applicant: ADOBE INC.

    Abstract: Methods and systems are provided for facilitating large-scale augmented reality in relation to outdoor scenes using estimated camera pose information. In particular, camera pose information for an image can be estimated by matching the image to a rendered ground-truth terrain model with known camera pose information. To match images with such renders, data driven cross-domain feature embedding can be learned using a neural network. Cross-domain feature descriptors can be used for efficient and accurate feature matching between the image and the terrain model renders. This feature matching allows images to be localized in relation to the terrain model, which has known camera pose information. This known camera pose information can then be used to estimate camera pose information in relation to the image.
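The matching step — comparing learned descriptors from a photograph against descriptors from terrain-model renders in one shared embedding space — is often realized as mutual-nearest-neighbor matching. The sketch below uses that common technique with synthetic embeddings; the descriptor dimensions, noise level, and helper are assumptions, not the patented learned matcher:

```python
import numpy as np

def mutual_nearest_matches(desc_photo, desc_render):
    """Match two sets of L2-normalized descriptors by mutual nearest neighbor,
    a standard stand-in for the cross-domain feature-matching step."""
    sim = desc_photo @ desc_render.T             # cosine similarity matrix
    nn_pr = sim.argmax(axis=1)                   # photo -> render
    nn_rp = sim.argmax(axis=0)                   # render -> photo
    return [(i, j) for i, j in enumerate(nn_pr) if nn_rp[j] == i]

rng = np.random.default_rng(2)
# Hypothetical embeddings: render descriptors are the photo descriptors plus
# small cross-domain noise, mimicking a well-trained shared embedding space.
photo = rng.normal(size=(10, 32))
photo /= np.linalg.norm(photo, axis=1, keepdims=True)
render = photo + 0.05 * rng.normal(size=(10, 32))
render /= np.linalg.norm(render, axis=1, keepdims=True)
perm = rng.permutation(10)
matches = mutual_nearest_matches(photo, render[perm])

# A match (i, j) is correct when it recovers the permutation: perm[j] == i.
correct = sum(perm[j] == i for i, j in matches)
print(f"{correct}/{len(matches)} matches correct")
```

Once features are matched against a render with known camera pose, that pose anchors the photograph for large-scale AR placement.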
