MODIFYING NEURAL NETWORKS FOR SYNTHETIC CONDITIONAL DIGITAL CONTENT GENERATION UTILIZING CONTRASTIVE PERCEPTUAL LOSS

    Publication Number: US20220148242A1

    Publication Date: 2022-05-12

    Application Number: US17091440

    Filing Date: 2020-11-06

    Applicant: Adobe Inc.

    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that utilize a contrastive perceptual loss to modify neural networks for generating synthetic digital content items. For example, the disclosed systems generate a synthetic digital content item based on a guide input to a generative neural network. The disclosed systems utilize an encoder neural network to generate encoded representations of the synthetic digital content item and a corresponding ground-truth digital content item. Additionally, the disclosed systems sample patches from the encoded representations of the digital content items and then determine a contrastive loss based on the perceptual distances between the patches in the encoded representations. Furthermore, the disclosed systems jointly update the parameters of the generative neural network and the encoder neural network utilizing the contrastive loss.
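
    As a hedged illustration of the patch-based contrastive objective the abstract describes, the following PyTorch sketch treats patches at the same spatial location in the encoded synthetic and ground-truth items as positive pairs and all other sampled locations as negatives. The feature-map shapes, the same-location positive pairing, and the temperature value are assumptions, not details from the patent.

        import torch
        import torch.nn.functional as F

        def patch_contrastive_loss(feat_synth, feat_gt, num_patches=64, tau=0.07):
            """feat_synth, feat_gt: (B, C, H, W) encoder feature maps."""
            B, C, H, W = feat_synth.shape
            # Sample the same patch locations in both encoded representations.
            idx = torch.randperm(H * W)[:num_patches]
            q = feat_synth.flatten(2)[:, :, idx].permute(0, 2, 1)  # (B, P, C)
            k = feat_gt.flatten(2)[:, :, idx].permute(0, 2, 1)     # (B, P, C)
            q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
            # Similarity of every query patch to every key patch; the diagonal
            # holds the positive (same-location) pairs.
            logits = torch.bmm(q, k.transpose(1, 2)) / tau         # (B, P, P)
            labels = torch.arange(num_patches, device=logits.device)
            labels = labels.unsqueeze(0).expand(B, -1)
            return F.cross_entropy(logits.flatten(0, 1), labels.flatten())

    The joint update the abstract mentions would then amount to stepping a single optimizer constructed over the union of the generator's and encoder's parameters.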

    Reconstructing three-dimensional scenes in a target coordinate system from multiple views

    Publication Number: US11257298B2

    Publication Date: 2022-02-22

    Application Number: US16822819

    Filing Date: 2020-03-18

    Applicant: Adobe Inc.

    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for reconstructing three-dimensional meshes from two-dimensional images of objects with automatic coordinate system alignment. For example, the disclosed system can generate feature vectors for a plurality of images having different views of an object. The disclosed system can process the feature vectors to generate coordinate-aligned feature vectors aligned with a coordinate system associated with an image. The disclosed system can generate a combined feature vector from the feature vectors aligned to the coordinate system. Additionally, the disclosed system can then generate a three-dimensional mesh representing the object from the combined feature vector.
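
    As a rough sketch of the described pipeline (per-view features, alignment to a target coordinate system, combination, mesh decoding), the PyTorch module below is illustrative only: the backbone, the pose-conditioned alignment MLP, the mean-pooling combination, and all dimensions are assumptions.

        import torch
        import torch.nn as nn

        class MultiViewMeshReconstructor(nn.Module):
            def __init__(self, feat_dim=256, pose_dim=12, num_vertices=2562):
                super().__init__()
                # Stand-in for an image CNN backbone.
                self.encoder = nn.Sequential(
                    nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
                # Maps each view's features into the target coordinate system,
                # conditioned on that view's relative camera pose.
                self.align = nn.Sequential(
                    nn.Linear(feat_dim + pose_dim, feat_dim), nn.ReLU(),
                    nn.Linear(feat_dim, feat_dim))
                self.decoder = nn.Linear(feat_dim, num_vertices * 3)

            def forward(self, views, rel_poses):
                """views: (V, 3, 64, 64); rel_poses: (V, pose_dim) flattened
                [R|t] of each view relative to the target view."""
                feats = self.encoder(views)                    # (V, feat_dim)
                aligned = self.align(torch.cat([feats, rel_poses], dim=-1))
                combined = aligned.mean(dim=0)                 # pool over views
                return self.decoder(combined).view(-1, 3)      # mesh vertices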

    3D object reconstruction using photometric mesh representation

    Publication Number: US11189094B2

    Publication Date: 2021-11-30

    Application Number: US16985402

    Filing Date: 2020-08-05

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for 3D object reconstruction using photometric mesh representations. A decoder is pretrained to transform points sampled from 2D patches of representative objects into 3D polygonal meshes. An image frame of the object is fed into an encoder to obtain an initial latent code vector. For each frame and camera pair from the sequence, a polygonal mesh is rendered at the given viewpoints. The mesh is optimized by creating a virtual viewpoint and rasterizing it to obtain a depth map. The 3D mesh projections are aligned by projecting the coordinates corresponding to the polygonal face vertices of the rasterized mesh to both selected viewpoints. The photometric error is determined from RGB pixel intensities sampled from both frames. Gradients from the photometric error are backpropagated into the vertices of the assigned polygonal indices by relating the barycentric coordinates of each image, updating the latent code vector.
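
    Only the photometric-error step is sketched below; the pretrained decoder, rasterization, and barycentric gradient routing are abstracted away. A key assumption is that the projection matrices map 3D points into normalized [-1, 1] image coordinates so that grid_sample can read RGB values directly; all function names are invented for illustration.

        import torch
        import torch.nn.functional as F

        def sample_rgb(image, pts2d):
            """image: (3, H, W); pts2d: (N, 2) in [-1, 1] normalized coords."""
            grid = pts2d.view(1, -1, 1, 2)
            return F.grid_sample(image.unsqueeze(0), grid, align_corners=True)

        def photometric_loss(points3d, cam_a, cam_b, frame_a, frame_b):
            """points3d: (N, 3) surface points from the rasterized mesh;
            cam_a, cam_b: (3, 4) projections to normalized image coords."""
            def project(P, X):
                Xh = torch.cat([X, torch.ones_like(X[:, :1])], dim=1)  # homogeneous
                x = Xh @ P.T
                return x[:, :2] / x[:, 2:3].clamp(min=1e-6)
            # Sample RGB at the projections of the same 3D points in both frames.
            rgb_a = sample_rgb(frame_a, project(cam_a, points3d))
            rgb_b = sample_rgb(frame_b, project(cam_b, points3d))
            return (rgb_a - rgb_b).abs().mean()  # L1 photometric error

    Because every step here is differentiable, gradients of this error can flow back through the sampled points toward the latent code vector, consistent with the optimization the abstract describes.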

    TRANSCRIPT-BASED INSERTION OF SECONDARY VIDEO CONTENT INTO PRIMARY VIDEO CONTENT

    Publication Number: US20210304799A1

    Publication Date: 2021-09-30

    Application Number: US17345081

    Filing Date: 2021-06-11

    Applicant: Adobe Inc.

    Abstract: Certain embodiments involve transcript-based techniques for facilitating insertion of secondary video content into primary video content. For instance, a video editor presents a video editing interface having a primary video section displaying a primary video, a text-based navigation section having navigable portions of a primary video transcript, and a secondary video menu section displaying candidate secondary videos. In some embodiments, candidate secondary videos are obtained by using target terms detected in the transcript to query a remote data source for the candidate secondary videos. In embodiments involving video insertion, the video editor identifies a portion of the primary video corresponding to a portion of the transcript selected within the text-based navigation section. The video editor inserts a secondary video, which is selected from the candidate secondary videos based on an input received at the secondary video menu section, at the identified portion of the primary video.
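
    A minimal sketch of the transcript-to-timeline mapping the insertion step relies on, assuming word-level timestamps are available; the Word and Insertion types and all names are invented for illustration, not taken from the patent.

        from dataclasses import dataclass

        @dataclass
        class Word:
            text: str
            start: float  # seconds into the primary video
            end: float

        @dataclass
        class Insertion:
            secondary_uri: str
            at: float  # insertion time in the primary video

        def time_range(transcript: list[Word], sel_start: int, sel_end: int):
            """Map a selected range of transcript word indices to video time."""
            words = transcript[sel_start:sel_end + 1]
            return words[0].start, words[-1].end

        def insert_secondary(transcript, sel_start, sel_end, secondary_uri):
            start, _ = time_range(transcript, sel_start, sel_end)
            # A real editor would splice the secondary clip into the primary
            # video at `start`; here we only record the edit decision.
            return Insertion(secondary_uri=secondary_uri, at=start)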

    GENERATING TAGS FOR A DIGITAL VIDEO

    Publication Number: US20200336802A1

    Publication Date: 2020-10-22

    Application Number: US16386031

    Filing Date: 2019-04-16

    Applicant: Adobe Inc.

    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for automatic tagging of videos. In particular, in one or more embodiments, the disclosed systems generate a set of tagged feature vectors (e.g., tagged feature vectors based on action-rich digital videos) to utilize in generating tags for an input digital video. For instance, the disclosed systems can extract a set of frames from the input digital video and generate feature vectors from the set of frames. In some embodiments, the disclosed systems generate aggregated feature vectors from the feature vectors. Furthermore, the disclosed systems can utilize the feature vectors (or aggregated feature vectors) to identify similar tagged feature vectors from the set of tagged feature vectors. Additionally, the disclosed systems can generate a set of tags for the input digital video by aggregating one or more tags corresponding to the identified similar tagged feature vectors.
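
    As one plausible reading of the retrieval step, the sketch below mean-pools frame features into a single aggregated vector, finds its nearest neighbors among pre-tagged vectors by cosine similarity, and merges their tags by vote. The aggregation method, similarity measure, and voting scheme are assumptions.

        import numpy as np
        from collections import Counter

        def tag_video(frame_feats, tagged_feats, tagged_labels, k=5, top_tags=3):
            """frame_feats: (F, D) per-frame features; tagged_feats: (N, D);
            tagged_labels: one list of tags per tagged feature vector."""
            query = frame_feats.mean(axis=0)             # aggregate the frames
            query /= np.linalg.norm(query) + 1e-8
            db = tagged_feats / (
                np.linalg.norm(tagged_feats, axis=1, keepdims=True) + 1e-8)
            sims = db @ query                            # cosine similarities
            neighbors = np.argsort(-sims)[:k]            # k most similar vectors
            votes = Counter(t for i in neighbors for t in tagged_labels[i])
            return [tag for tag, _ in votes.most_common(top_tags)]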

    MOTION MODEL REFINEMENT BASED ON CONTACT ANALYSIS AND OPTIMIZATION

    Publication Number: US20220139019A1

    Publication Date: 2022-05-05

    Application Number: US17573890

    Filing Date: 2022-01-12

    Applicant: Adobe Inc.

    Abstract: In some embodiments, a model training system obtains a set of animation models. For each of the animation models, the model training system renders the animation model to generate a sequence of video frames containing a character using a set of rendering parameters and extracts joint points of the character from each frame of the sequence of video frames. The model training system further determines, for each frame of the sequence of video frames, whether a subset of the joint points are in contact with a ground plane in a three-dimensional space and generates contact labels for the subset of the joint points. The model training system trains a contact estimation model using training data containing the joint points extracted from the sequences of video frames and the generated contact labels. The contact estimation model can be used to refine a motion model for a character.
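
    A hedged sketch of the label-generation step only: a joint is marked as in contact when it is close to the ground plane and nearly static between frames. The height and velocity thresholds and the y-up, ground-at-zero convention are assumptions.

        import numpy as np

        def contact_labels(joints, height_eps=0.02, vel_eps=0.005):
            """joints: (T, J, 3) joint positions over T frames, ground at y=0.
            Returns a (T, J) boolean array of per-joint contact labels."""
            height = joints[..., 1]                  # y coordinate per joint
            vel = np.zeros_like(height)
            # Per-joint displacement between consecutive frames.
            vel[1:] = np.linalg.norm(joints[1:] - joints[:-1], axis=-1)
            return (height < height_eps) & (vel < vel_eps)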

    Systems and methods of learning visual importance for graphic design and data visualization

    Publication Number: US11189066B1

    Publication Date: 2021-11-30

    Application Number: US16188626

    Filing Date: 2018-11-13

    Applicant: Adobe Inc.

    Abstract: Embodiments disclosed herein describe systems, methods, and products that train one or more neural networks and execute the trained networks across various applications. The neural networks are trained to optimize a loss function comprising a pixel-level comparison between the outputs generated by the neural networks and a ground-truth dataset generated from a bubble-view methodology or an explicit importance-maps methodology. Each of these methodologies may be more efficient than, and may closely approximate, the more expensive but accurate human eye-gaze measurements. The embodiments herein leverage this training process to generate importance maps for a plurality of graphic objects, offering interactive applications for graphic designs and data visualizations. Based on the importance maps, the computer may provide real-time design feedback, generate smart thumbnails of the graphic objects, provide recommendations for design retargeting, and extract smart color themes from the graphic objects.
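
    A minimal sketch of the training objective, assuming the pixel-level comparison is a per-pixel binary cross-entropy between the predicted importance map and a ground-truth map with values in [0, 1]; the actual loss, architecture, and data format in the patent may differ.

        import torch
        import torch.nn.functional as F

        def importance_loss(pred_logits, gt_importance):
            """pred_logits, gt_importance: (B, 1, H, W); ground-truth values
            in [0, 1], e.g. normalized click or annotation densities."""
            return F.binary_cross_entropy_with_logits(pred_logits, gt_importance)

        def train_step(model, optimizer, designs, gt_maps):
            optimizer.zero_grad()
            loss = importance_loss(model(designs), gt_maps)  # pixel-level loss
            loss.backward()
            optimizer.step()
            return loss.item()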

    REPRESENTATION LEARNING FROM VIDEO WITH SPATIAL AUDIO

    Publication Number: US20210350135A1

    Publication Date: 2021-11-11

    Application Number: US16868805

    Filing Date: 2020-05-07

    Applicant: Adobe Inc.

    Abstract: A computer system is trained to understand audio-visual spatial correspondence using audio-visual clips having multi-channel audio. The computer system includes an audio subnetwork, a video subnetwork, and a pretext subnetwork. The audio subnetwork receives the two channels of audio from the audio-visual clips, and the video subnetwork receives the video frames from the audio-visual clips. In a subset of the audio-visual clips, the audio-visual spatial relationship is misaligned, causing the audio-visual spatial cues for the audio and video to be incorrect. The audio subnetwork outputs an audio feature vector for each audio-visual clip, and the video subnetwork outputs a video feature vector for each audio-visual clip. The audio and video feature vectors for each audio-visual clip are merged and provided to the pretext subnetwork, which is configured to classify the merged vector as either having a misaligned audio-visual spatial relationship or not. The subnetworks are trained based on the loss calculated from the classification.
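
    A sketch under assumptions: here misaligned examples are created by swapping the stereo audio channels on a random half of the batch, and a pretext head classifies whether each audio-visual pair is spatially aligned. The subnetworks are stand-ins, and channel swapping is only one way the described misalignment could be produced.

        import torch
        import torch.nn as nn

        class AlignmentPretext(nn.Module):
            def __init__(self, audio_net, video_net, dim=512):
                super().__init__()
                self.audio_net, self.video_net = audio_net, video_net
                self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                          nn.Linear(dim, 2))  # aligned vs. not

            def forward(self, audio, frames):
                # Merge the audio and video feature vectors for classification.
                merged = torch.cat([self.audio_net(audio),
                                    self.video_net(frames)], dim=-1)
                return self.head(merged)

        def make_batch(audio, frames):
            """audio: (B, 2, T) stereo clips. Swaps L/R channels on a random
            half of the batch; label 1 marks a misaligned pair."""
            labels = torch.randint(0, 2, (audio.shape[0],))
            audio = audio.clone()
            audio[labels == 1] = audio[labels == 1].flip(1)  # swap channels
            return audio, frames, labels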

    Generating tags for a digital video

    Publication Number: US11146862B2

    Publication Date: 2021-10-12

    Application Number: US16386031

    Filing Date: 2019-04-16

    Applicant: Adobe Inc.

    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for automatic tagging of videos. In particular, in one or more embodiments, the disclosed systems generate a set of tagged feature vectors (e.g., tagged feature vectors based on action-rich digital videos) to utilize in generating tags for an input digital video. For instance, the disclosed systems can extract a set of frames from the input digital video and generate feature vectors from the set of frames. In some embodiments, the disclosed systems generate aggregated feature vectors from the feature vectors. Furthermore, the disclosed systems can utilize the feature vectors (or aggregated feature vectors) to identify similar tagged feature vectors from the set of tagged feature vectors. Additionally, the disclosed systems can generate a set of tags for the input digital video by aggregating one or more tags corresponding to the identified similar tagged feature vectors.

    Transcript-based insertion of secondary video content into primary video content

    Publication Number: US11049525B2

    Publication Date: 2021-06-29

    Application Number: US16281903

    Filing Date: 2019-02-21

    Applicant: Adobe Inc.

    Abstract: Certain embodiments involve transcript-based techniques for facilitating insertion of secondary video content into primary video content. For instance, a video editor presents a video editing interface having a primary video section displaying a primary video, a text-based navigation section having navigable portions of a primary video transcript, and a secondary video menu section displaying candidate secondary videos. In some embodiments, candidate secondary videos are obtained by using target terms detected in the transcript to query a remote data source for the candidate secondary videos. In embodiments involving video insertion, the video editor identifies a portion of the primary video corresponding to a portion of the transcript selected within the text-based navigation section. The video editor inserts a secondary video, which is selected from the candidate secondary videos based on an input received at the secondary video menu section, at the identified portion of the primary video.
