Skin tone assisted digital image color matching

    Publication No.: US10936853B1

    Publication Date: 2021-03-02

    Application No.: US16593872

    Application Date: 2019-10-04

    Applicant: Adobe Inc.

    Abstract: In implementations of skin tone assisted digital image color matching, a device implements a color editing system, which includes a facial detection module to detect faces in an input image and in a reference image, and includes a skin tone model to determine a skin tone value reflective of a skin tone of each of the faces. A color matching module can be implemented to group the faces into one or more face groups based on the skin tone value of each of the faces, match a face group pair as an input image face group paired with a reference image face group, and generate a modified image from the input image based on color features of the reference image, the color features including face skin tones of the respective faces in the face group pair as part of the color features applied to modify the input image.
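    The grouping-and-matching step described above can be sketched as follows. This is an illustrative simplification, not the patented implementation: skin tone values are treated as scalars, `group_faces` uses a simple greedy clustering with an assumed threshold, and `match_groups` pairs each input group with the nearest reference group.

```python
# Hypothetical sketch: cluster scalar skin tone values into face groups,
# then pair each input-image group with the closest reference-image group.
# Threshold and scalar-tone representation are assumptions for illustration.

def group_faces(tones, threshold=10.0):
    """Greedy grouping: a face joins the first group whose running mean
    skin tone is within `threshold`; otherwise it starts a new group."""
    groups = []
    for tone in sorted(tones):
        for g in groups:
            if abs(tone - sum(g) / len(g)) <= threshold:
                g.append(tone)
                break
        else:
            groups.append([tone])
    return [sum(g) / len(g) for g in groups]  # one mean tone per group

def match_groups(input_groups, reference_groups):
    """Pair each input face group with the reference face group whose
    mean skin tone is closest, yielding (input, reference) tone pairs."""
    return [(t, min(reference_groups, key=lambda r: abs(r - t)))
            for t in input_groups]

input_tones = [21.0, 23.0, 55.0]      # two similar faces, one distinct
reference_tones = [20.0, 58.0, 60.0]
pairs = match_groups(group_faces(input_tones), group_faces(reference_tones))
```

    The resulting pairs would then drive which reference-image color features (including face skin tones) are applied to modify the input image.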

    3D object reconstruction using photometric mesh representation

    Publication No.: US10769848B1

    Publication Date: 2020-09-08

    Application No.: US16421729

    Application Date: 2019-05-24

    Applicant: Adobe, Inc.

    Abstract: Techniques are disclosed for 3D object reconstruction using photometric mesh representations. A decoder is pretrained to transform points sampled from 2D patches of representative objects into 3D polygonal meshes. An image frame of the object is fed into an encoder to get an initial latent code vector. For each frame and camera pair from the sequence, a polygonal mesh is rendered at the given viewpoints. The mesh is optimized by creating a virtual viewpoint, rasterized to obtain a depth map. The 3D mesh projections are aligned by projecting the coordinates corresponding to the polygonal face vertices of the rasterized mesh to both selected viewpoints. The photometric error is determined from RGB pixel intensities sampled from both frames. Gradients from the photometric error are backpropagated into the vertices of the assigned polygonal indices by relating the barycentric coordinates of each image to update the latent code vector.
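    The photometric-error step alone can be sketched as below. This is a hedged simplification under stated assumptions: images are grayscale 2D lists rather than RGB frames, and the "projected mesh coordinates" are given directly as points; in the described technique the error's gradients would then be backpropagated into the mesh vertices and latent code.

```python
# Sketch of the photometric error: intensities are bilinearly sampled at
# the same projected mesh coordinates in two frames and compared with a
# mean squared difference. Grayscale images are an assumption for brevity.

def sample(image, x, y):
    """Bilinear sample of an image stored as a 2D list of intensities."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return (image[y0][x0] * (1 - dx) * (1 - dy)
            + image[y0][x0 + 1] * dx * (1 - dy)
            + image[y0 + 1][x0] * (1 - dx) * dy
            + image[y0 + 1][x0 + 1] * dx * dy)

def photometric_error(frame_a, frame_b, points):
    """Mean squared intensity difference over projected vertex points."""
    diffs = [(sample(frame_a, x, y) - sample(frame_b, x, y)) ** 2
             for x, y in points]
    return sum(diffs) / len(diffs)

frame_a = [[0.0, 1.0], [0.0, 1.0]]
frame_b = [[0.0, 0.0], [0.0, 0.0]]
err = photometric_error(frame_a, frame_b, [(0.5, 0.5)])
```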

    Joint Training Technique for Depth Map Generation

    Publication No.: US20200175700A1

    Publication Date: 2020-06-04

    Application No.: US16204785

    Application Date: 2018-11-29

    Applicant: Adobe Inc.

    Abstract: A joint training technique for depth map generation, implemented by a depth prediction system as part of a computing device, is described. The depth prediction system is configured to generate a candidate feature map from features extracted from training digital images, generate a candidate segmentation map and a candidate depth map from the candidate feature map, and jointly train portions of the depth prediction system using a loss function. Consequently, the depth prediction system is able to generate a depth map that identifies depths of objects using ordinal depth information and accurately delineates object boundaries within a single digital image.
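    The joint-training idea can be illustrated with a combined loss: a single scalar sums a depth term and a segmentation term, so shared feature-map weights would receive gradients from both tasks. The specific loss forms (L1 depth, cross-entropy segmentation) and weights here are assumptions, not the patent's disclosed loss function.

```python
# Minimal sketch of a joint loss for depth + segmentation training.
# Loss forms and weights are illustrative assumptions.
import math

def l1_depth_loss(pred, target):
    """Mean absolute error over per-pixel depth predictions."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def cross_entropy_seg_loss(probs, labels):
    """Mean negative log-likelihood of the true class per pixel."""
    return -sum(math.log(p[l]) for p, l in zip(probs, labels)) / len(labels)

def joint_loss(depth_pred, depth_gt, seg_probs, seg_labels,
               w_depth=1.0, w_seg=0.5):
    """Weighted sum of both task losses for joint backpropagation."""
    return (w_depth * l1_depth_loss(depth_pred, depth_gt)
            + w_seg * cross_entropy_seg_loss(seg_probs, seg_labels))

loss = joint_loss([1.0, 2.5], [1.0, 2.0], [[0.9, 0.1]], [0])
```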

    LEARNING A 3D SCENE GENERATION MODEL FROM IMAGES OF A SELF-SIMILAR SCENE

    Publication No.: US20250061647A1

    Publication Date: 2025-02-20

    Application No.: US18233458

    Application Date: 2023-08-14

    Abstract: A scene modeling system accesses a set of input two-dimensional (2D) images of a three-dimensional (3D) environment, wherein the input 2D images are captured from a plurality of camera orientations. The environment includes first content. The scene modeling system applies a scene generation model to the set of input 2D images to generate a 3D remix scene. Applying the scene generation model includes configuring the scene generation model using at least a 2D discriminator and a 3D discriminator. Applying the scene generation model includes transmitting, for display via a user interface, the 3D remix scene. The 3D remix scene includes second content that is different from the first content.
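    The dual-discriminator configuration can be sketched as a generator loss that aggregates feedback from both discriminators: a 2D discriminator scoring rendered views and a 3D discriminator scoring generated geometry. The non-saturating GAN form and the 3D weight below are assumptions for illustration; the patent does not disclose these specifics here.

```python
# Illustrative sketch: generator loss combining a 2D and a 3D
# discriminator. Loss form and weighting are assumptions.
import math

def generator_loss(d2_score, d3_score, weight_3d=0.5):
    """Non-saturating GAN generator loss summed over both discriminators;
    each score is that discriminator's probability the sample is real."""
    return -math.log(d2_score) - weight_3d * math.log(d3_score)

# Both discriminators maximally uncertain (score 0.5):
loss = generator_loss(0.5, 0.5)
```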

    Skin tone assisted digital image color matching

    Publication No.: US11610433B2

    Publication Date: 2023-03-21

    Application No.: US17154830

    Application Date: 2021-01-21

    Applicant: Adobe Inc.

    Abstract: In implementations of skin tone assisted digital image color matching, a device implements a color editing system, which includes a facial detection module to detect faces in an input image and in a reference image, and includes a skin tone model to determine a skin tone value reflective of a skin tone of each of the faces. A color matching module can be implemented to group the faces into one or more face groups based on the skin tone value of each of the faces, match a face group pair as an input image face group paired with a reference image face group, and generate a modified image from the input image based on color features of the reference image, the color features including face skin tones of the respective faces in the face group pair as part of the color features applied to modify the input image.

    Intelligent video reframing
    Granted Patent

    Publication No.: US11490048B2

    Publication Date: 2022-11-01

    Application No.: US17217951

    Application Date: 2021-03-30

    Applicant: Adobe Inc.

    Abstract: Embodiments of the present invention are directed towards reframing videos from one aspect ratio to another aspect ratio while maintaining visibility of regions of interest. A set of regions of interest are determined in frames in a video with a first aspect ratio. The set of regions of interest can be used to estimate an initial camera path. An optimal camera path is determined by leveraging the identified regions of interest using the initial camera path. Sub crops with a second aspect ratio different from the first aspect ratio of the video are identified. The sub crops are placed as designated using the optimal camera path to generate a cropped video with the second aspect ratio.
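    The path-smoothing and sub-crop placement can be sketched as below. This is a simplified stand-in, not the patented optimization: region-of-interest centers give an initial camera path, a moving average stands in for the optimal-path solve, and each smoothed center defines a full-height sub-crop at the target aspect ratio.

```python
# Simplified sketch of reframing: smooth per-frame crop centers, then
# place fixed-aspect sub-crops. Moving-average smoothing is an assumption
# standing in for the optimal camera path computation.

def smooth_path(centers, window=3):
    """Moving-average smoothing of per-frame crop-center x coordinates."""
    half = window // 2
    out = []
    for i in range(len(centers)):
        lo, hi = max(0, i - half), min(len(centers), i + half + 1)
        out.append(sum(centers[lo:hi]) / (hi - lo))
    return out

def crop_boxes(centers, src_w, src_h, target_aspect):
    """Per-frame sub-crop (left, top, width, height): full source height,
    width set by the target aspect ratio, clamped to the frame."""
    crop_w = round(src_h * target_aspect)
    boxes = []
    for cx in centers:
        left = min(max(0, round(cx - crop_w / 2)), src_w - crop_w)
        boxes.append((left, 0, crop_w, src_h))
    return boxes

path = smooth_path([100, 140, 120, 400])
boxes = crop_boxes(path, src_w=1920, src_h=1080, target_aspect=9 / 16)
```

    A vertical 9:16 reframe of 1080p footage, as above, yields 608-pixel-wide crops that follow the smoothed path while staying inside the frame.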

    VIEW SYNTHESIS OF A DYNAMIC SCENE
    Patent Application

    Publication No.: US20220301252A1

    Publication Date: 2022-09-22

    Application No.: US17204571

    Application Date: 2021-03-17

    Applicant: ADOBE INC.

    Abstract: Embodiments of the technology described herein provide view and time synthesis of dynamic scenes captured by a camera. The technology described herein represents a dynamic scene as a continuous function of both space and time. The technology may parameterize this function with a deep neural network (a multi-layer perceptron (MLP)) and perform rendering using volume tracing. At a very high level, a dynamic scene depicted in the video may be used to train the MLP. Once trained, the MLP is able to synthesize a view of the scene at a time and/or camera pose not found in the video through prediction. As used herein, a dynamic scene comprises one or more moving objects.
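    The volume-tracing step can be sketched as below, assuming the trained space-time MLP is already available; a stub whose density grows with time stands in for it here (every name in the sketch is illustrative). Rendering alpha-composites (density, color) samples along a ray at the queried time.

```python
# Sketch of volume tracing over a space-time radiance function.
# `mlp_stub` is a placeholder for the trained MLP, not the real model.
import math

def mlp_stub(point, t):
    """Stand-in for the trained space-time MLP: density grows with time
    to mimic an object moving into the ray (illustrative only)."""
    return 2.0 * t, 1.0  # (density sigma, grayscale color)

def render_ray(origin, direction, t, n_samples=4, step=0.25):
    """Alpha-composite samples along one ray at query time t."""
    transmittance, color = 1.0, 0.0
    for i in range(n_samples):
        point = [o + d * step * i for o, d in zip(origin, direction)]
        sigma, c = mlp_stub(point, t)
        alpha = 1.0 - math.exp(-sigma * step)   # opacity of this segment
        color += transmittance * alpha * c      # accumulate visible color
        transmittance *= 1.0 - alpha            # light left after segment
    return color

pixel_early = render_ray([0, 0, 0], [0, 0, 1], t=0.0)  # empty scene
pixel_late = render_ray([0, 0, 0], [0, 0, 1], t=1.0)   # object present
```

    Because the function is continuous in time, the same query machinery renders views at times and camera poses never observed in the input video.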

    CORRECTIVE LIGHTING FOR VIDEO INPAINTING

    Publication No.: US20220292649A1

    Publication Date: 2022-09-15

    Application No.: US17196581

    Application Date: 2021-03-09

    Applicant: Adobe Inc.

    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference video frame to other video frames depicting a scene. One example method includes one or more processing devices performing operations that include accessing a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The operations also include computing a target motion of a target pixel that is subject to a motion constraint. The motion constraint is based on a three-dimensional model of the reference object. Further, the operations include determining color data of the target pixel to correspond to the target motion. The color data includes a color value and a gradient. The operations also include determining gradient constraints using gradient values of neighbor pixels. Additionally, the processing devices update the color data of the target pixel subject to the gradient constraints.
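    The gradient-constrained color update can be illustrated with a single relaxation step of a Poisson-style solve: the target pixel's value is re-estimated so its differences to neighbor pixels match the desired gradients. Scalar values and a plain Jacobi update are assumptions for brevity; the described method operates on propagated color data with corrective lighting.

```python
# Illustrative one-step sketch of a gradient-constrained pixel update
# (a Jacobi step of a Poisson-style solve); scalars stand in for colors.

def update_pixel(neighbors, desired_gradients):
    """Re-estimate the target pixel as the average of each neighbor's
    value plus the desired gradient from that neighbor to the target."""
    estimates = [n + g for n, g in zip(neighbors, desired_gradients)]
    return sum(estimates) / len(estimates)

# Neighbors at 10 and 14 with desired gradients +2 and -2 both imply 12:
value = update_pixel([10.0, 14.0], [2.0, -2.0])
```

    Iterating such updates over all target pixels drives the inpainted region toward colors consistent with the propagated gradients.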
