IMAGE-BLENDING VIA ALIGNMENT OR PHOTOMETRIC ADJUSTMENTS COMPUTED BY A NEURAL NETWORK

    Publication No.: US20190279346A1

    Publication Date: 2019-09-12

    Application No.: US15914659

    Filing Date: 2018-03-07

    Applicant: Adobe Inc.

    Abstract: Certain embodiments involve blending images using neural networks to automatically generate alignment or photometric adjustments that control image blending operations. For instance, a foreground image and background image data are provided to an adjustment-prediction network that has been trained, using a reward network, to compute alignment or photometric adjustments that optimize blending reward scores. An adjustment action (e.g., an alignment or photometric adjustment) is computed by applying the adjustment-prediction network to the foreground image and the background image data. A target background region is extracted from the background image data by applying the adjustment action to the background image data. The target background region is blended with the foreground image, and the resultant blended image is output.
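
    A minimal sketch of the inference flow this abstract describes, in Python with hypothetical callables (predict_adjustment and blend are assumptions; the patent does not publish an API): the trained network proposes an adjustment action, the action selects and adjusts a target background region, and that region is blended with the foreground.

        def blend_with_predicted_adjustment(foreground, background,
                                            predict_adjustment, blend):
            """Hypothetical sketch: apply a predicted adjustment, then blend.

            foreground, background: NumPy image arrays (assumed).
            predict_adjustment: trained adjustment-prediction network, assumed
            to return an alignment offset and a photometric gain.
            blend: any blending operator, e.g. alpha or gradient-domain blending.
            """
            # 1. Compute an adjustment action from both inputs.
            action = predict_adjustment(foreground, background)

            # 2. Extract the target background region by applying the action:
            #    crop at the predicted offset, scale by the photometric gain.
            h, w = foreground.shape[:2]
            y, x = action["dy"], action["dx"]
            region = background[y:y + h, x:x + w] * action.get("gain", 1.0)

            # 3. Blend the foreground into the adjusted region and return it.
            return blend(foreground, region)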

    Joint Trimap Estimation and Alpha Matte Prediction for Video Matting

    Publication No.: US20230360177A1

    Publication Date: 2023-11-09

    Application No.: US17736397

    Filing Date: 2022-05-04

    Applicant: Adobe Inc.

    Abstract: In implementations of systems for joint trimap estimation and alpha matte prediction, a computing device implements a matting system to estimate a trimap for a frame of a digital video using a first stage of a machine learning model. An alpha matte is predicted for the frame based on the trimap and the frame using a second stage of the machine learning model. The matting system generates a refined trimap and a refined alpha matte for the frame based on the alpha matte, the trimap, and the frame using a third stage of the machine learning model. An additional trimap is estimated for an additional frame of the digital video based on the refined trimap and the refined alpha matte using the first stage of the machine learning model.
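
    A minimal sketch, with the three stages standing in as hypothetical callables (the patent does not publish code), of how the refined trimap and alpha matte from each frame seed the trimap estimate for the next frame:

        def video_matting(frames, estimate_trimap, predict_alpha, refine):
            """Three-stage per-frame matting with temporal propagation (sketch)."""
            prev = None  # (refined_trimap, refined_alpha) from the previous frame
            alphas = []
            for frame in frames:
                # Stage 1: estimate a trimap, conditioned on the previous refined outputs.
                trimap = estimate_trimap(frame, prev)
                # Stage 2: predict an alpha matte from the frame and its trimap.
                alpha = predict_alpha(frame, trimap)
                # Stage 3: jointly refine the trimap and the alpha matte.
                refined_trimap, refined_alpha = refine(frame, trimap, alpha)
                alphas.append(refined_alpha)
                prev = (refined_trimap, refined_alpha)
            return alphas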

    Generalizable robot approach control techniques

    Publication No.: US11449079B2

    Publication Date: 2022-09-20

    Application No.: US16262448

    Filing Date: 2019-01-30

    Applicant: Adobe Inc.

    Abstract: Systems and techniques are described that provide for generalizable approach-policy learning and implementation for robotic object approaching. The described techniques provide fast and accurate approach of a specified object, or type of object, in many different environments. They enable a robot to receive an identification of an object or type of object from a user and then navigate to the desired object without further control from the user. Moreover, the robot's approach to the desired object is performed efficiently, e.g., with a minimum number of movements. Further, the approach techniques may be used even when the robot is placed in a new environment, such as when the same type of object must be approached in multiple settings.
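
    A minimal control-loop sketch (all names hypothetical; nothing here is the patented implementation) of what such an approach policy implies: the user specifies the target once, and the learned policy maps each observation to a movement command until it signals arrival.

        def approach_object(robot, policy, target_object, max_steps=200):
            """Drive a robot toward a user-specified object with a learned policy."""
            for _ in range(max_steps):
                observation = robot.observe()                 # e.g. a camera image
                action = policy(observation, target_object)   # environment-agnostic policy
                if action == "STOP":                          # policy signals arrival
                    return True
                robot.execute(action)                         # e.g. turn or move-forward command
            return False                                      # gave up after max_steps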

    Generating tags for a digital video

    Publication No.: US11146862B2

    Publication Date: 2021-10-12

    Application No.: US16386031

    Filing Date: 2019-04-16

    Applicant: Adobe Inc.

    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for automatic tagging of videos. In particular, in one or more embodiments, the disclosed systems generate a set of tagged feature vectors (e.g., tagged feature vectors based on action-rich digital videos) for use in generating tags for an input digital video. For instance, the disclosed systems can extract a set of frames from the input digital video and generate feature vectors from the set of frames. In some embodiments, the disclosed systems generate aggregated feature vectors from the feature vectors. Furthermore, the disclosed systems can utilize the feature vectors (or aggregated feature vectors) to identify similar tagged feature vectors from the set of tagged feature vectors. Additionally, the disclosed systems can generate a set of tags for the input digital video by aggregating one or more tags corresponding to the identified similar tagged feature vectors.
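
    A minimal sketch of the retrieval-style tagging the abstract describes, assuming NumPy, mean pooling as the aggregation, and cosine similarity for the neighbour search (the abstract leaves these choices open):

        import numpy as np

        def tag_video(frames, extract_features, tagged_db, k=5):
            """tagged_db: list of (feature_vector, tag_set) pairs from tagged videos."""
            # Generate per-frame feature vectors and aggregate them (mean pooling here).
            feats = np.stack([extract_features(f) for f in frames])
            query = feats.mean(axis=0)

            # Identify the k most similar tagged feature vectors (cosine similarity).
            db = np.stack([vec for vec, _ in tagged_db])
            sims = db @ query / (np.linalg.norm(db, axis=1) * np.linalg.norm(query) + 1e-8)
            nearest = np.argsort(-sims)[:k]

            # Aggregate the neighbours' tags into the output tag set.
            tags = set()
            for i in nearest:
                tags.update(tagged_db[i][1])
            return tags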

    Material capture using imaging

    Publication No.: US10818022B2

    Publication Date: 2020-10-27

    Application No.: US16229759

    Filing Date: 2018-12-21

    Applicant: ADOBE INC.

    Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
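
    A minimal pipeline sketch with hypothetical callables and a hypothetical distance method on library entries (the patent does not define these interfaces):

        def capture_material(images, calibrate, material_library, fit_model, top_k=3):
            """Calibrate images, select reference materials, fit a material model."""
            # 1. Calibrate: align corresponding pixels and recover per-pixel reflectance.
            aligned, reflectance = calibrate(images)

            # 2. Select the reference materials whose reflectance best matches the
            #    calibrated observations (the scoring function is an assumption).
            references = sorted(material_library,
                                key=lambda m: m.distance(reflectance))[:top_k]

            # 3. Fit a material model that represents the surface's properties.
            return fit_model(aligned, reflectance, references)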

    VIDEO OBJECT SEGMENTATION BY REFERENCE-GUIDED MASK PROPAGATION

    Publication No.: US20200250436A1

    Publication Date: 2020-08-06

    Application No.: US16856292

    Filing Date: 2020-04-23

    Applicant: Adobe Inc.

    Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame, based on a reference frame and a reference mask that define the target object, and propagates the segmentation mask of the target object from a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
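
    A minimal propagation-loop sketch (the network is a hypothetical callable) showing the two inputs the abstract names: the fixed reference frame/mask that defines the target, and the previous frame's mask that is propagated forward.

        def segment_video(frames, ref_frame, ref_mask, network):
            """Propagate a reference-defined mask through a video, frame by frame."""
            masks = [ref_mask]
            for prev_frame, frame in zip(frames, frames[1:]):
                # The network both detects the target (via the reference frame/mask)
                # and propagates the previous frame's mask to the current frame.
                mask = network(ref_frame, ref_mask, prev_frame, masks[-1], frame)
                masks.append(mask)
            return masks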

    Image-blending via alignment or photometric adjustments computed by a neural network

    Publication No.: US10600171B2

    Publication Date: 2020-03-24

    Application No.: US15914659

    Filing Date: 2018-03-07

    Applicant: Adobe Inc.

    Abstract: Certain embodiments involve blending images using neural networks to automatically generate alignment or photometric adjustments that control image blending operations. For instance, a foreground image and background image data are provided to an adjustment-prediction network that has been trained, using a reward network, to compute alignment or photometric adjustments that optimize blending reward scores. An adjustment action (e.g., an alignment or photometric adjustment) is computed by applying the adjustment-prediction network to the foreground image and the background image data. A target background region is extracted from the background image data by applying the adjustment action to the background image data. The target background region is blended with the foreground image, and the resultant blended image is output.
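
    The inference path for this patent is sketched under its pre-grant publication above (same application, US15914659). As a complement, here is a minimal PyTorch-style sketch, entirely hypothetical, of how a reward network could drive training of the adjustment-prediction network; it assumes the blending operator is differentiable, which the abstract does not state.

        def train_step(predictor, reward_net, optimizer, foreground, background, blend):
            """One step of reward-driven training for the adjustment predictor (sketch)."""
            action = predictor(foreground, background)       # predicted adjustment action
            blended = blend(foreground, background, action)  # assumed differentiable
            reward = reward_net(blended)                     # learned blending reward score
            loss = -reward.mean()                            # ascend the reward score
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()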

    Generating a compact video feature representation in a digital medium environment

    Publication No.: US10430661B2

    Publication Date: 2019-10-01

    Application No.: US15384831

    Filing Date: 2016-12-20

    Applicant: Adobe Inc.

    Abstract: Techniques and systems are described to generate a compact video feature representation for sequences of frames in a video. In one example, values of features are extracted from each frame of a plurality of frames of a video using machine learning, e.g., through use of a convolutional neural network. A video feature representation is generated of temporal order dynamics of the video, e.g., through use of a recurrent neural network. For example, a maximum value reached by each feature of the plurality of features across the plurality of frames is maintained. A timestamp is also maintained that indicates when the maximum value is reached for each feature of the plurality of features. The video feature representation is then output as a basis to determine similarity of the video with at least one other video based on the video feature representation.
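
    The max-plus-timestamp encoding is concrete enough to sketch directly; a minimal NumPy version follows (concatenating maxima and normalized timestamps into one vector is an assumption about the output layout):

        import numpy as np

        def compact_video_representation(frame_features):
            """frame_features: (T, D) array of per-frame feature values.

            Returns a (2*D,) vector: each feature's maximum over the video plus
            a normalized timestamp of when that maximum was reached.
            """
            feats = np.asarray(frame_features, dtype=float)
            T = feats.shape[0]
            max_values = feats.max(axis=0)                     # max of each feature
            timestamps = feats.argmax(axis=0) / max(T - 1, 1)  # when the max occurred
            return np.concatenate([max_values, timestamps])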

    VIDEO GENERATION USING FRAME-WISE TOKEN EMBEDDINGS

    Publication No.: US20250119624A1

    Publication Date: 2025-04-10

    Application No.: US18894443

    Filing Date: 2024-09-24

    Applicant: ADOBE INC.

    Abstract: A method, apparatus, non-transitory computer-readable medium, and system for generating synthetic videos include obtaining an input prompt describing a video scene. Embodiments then generate a plurality of frame-wise token embeddings, corresponding respectively to a sequence of video frames, based on the input prompt. Subsequently, embodiments generate, using a video generation model, a synthesized video depicting the video scene. The synthesized video includes a plurality of images corresponding to the sequence of video frames.
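
    A minimal sketch with hypothetical components (text_encoder, temporal_module, and video_model are assumptions, not the patented architecture) of conditioning each frame on its own token embedding:

        def generate_video(prompt, text_encoder, temporal_module, video_model,
                           num_frames=16):
            """Generate a video from a prompt via frame-wise token embeddings (sketch)."""
            # Encode the prompt once, then derive one token embedding per frame so
            # each frame of the sequence gets its own conditioning signal.
            prompt_embedding = text_encoder(prompt)
            frame_embeddings = [temporal_module(prompt_embedding, t, num_frames)
                                for t in range(num_frames)]
            # The video generation model renders one image per frame embedding.
            return video_model(frame_embeddings)  # sequence of num_frames images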
