ENHANCING MEDIA CONTENT EFFECTIVENESS USING FEEDBACK BETWEEN EVALUATION AND CONTENT EDITING

    Publication Number: US20210264446A1

    Publication Date: 2021-08-26

    Application Number: US16796169

    Application Date: 2020-02-20

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for improving media content effectiveness. A methodology implementing the techniques according to an embodiment includes generating an intermediate representation (IR) of provided media content, the IR specifying editable elements of the content and maintaining a result of cumulative edits to those elements. The method also includes editing the elements of the IR to generate a set of candidate IR variations. The method further includes creating a set of candidate media contents based on the candidate IR variations, evaluating the candidate media contents to generate effectiveness scores, and pruning the set of candidate IR variations to retain, as surviving IR variations, a threshold number of candidates associated with the highest effectiveness scores. The process iterates until an effectiveness score exceeds a threshold value, the incremental improvement at each iteration falls below a desired value, or a maximum number of iterations has been performed.
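
    The loop described in the abstract is essentially a beam search over content edits. Below is a minimal sketch of that control flow, assuming hypothetical `edit`, `render`, and `score` callables in place of the patent's IR editing, content creation, and effectiveness evaluation components:

    ```python
    def optimize_content(ir, edit, render, score, beam=5, variants=4,
                         score_threshold=0.9, min_gain=1e-3, max_iters=50):
        """Edit/evaluate/prune loop over candidate IR variations."""
        best_ir, best_score = ir, score(render(ir))
        survivors = [(best_ir, best_score)]
        for _ in range(max_iters):
            # Edit each surviving IR to generate candidate IR variations.
            candidates = [edit(s) for s, _ in survivors for _ in range(variants)]
            # Create media content from each candidate and score effectiveness.
            scored = [(c, score(render(c))) for c in candidates]
            # Prune: retain only the top-scoring candidates as survivors.
            survivors = sorted(scored, key=lambda t: t[1], reverse=True)[:beam]
            top_ir, top_score = survivors[0]
            gain = top_score - best_score
            if top_score > best_score:
                best_ir, best_score = top_ir, top_score
            # Stop on a sufficiently high score or diminishing improvement;
            # the for-loop bounds the maximum number of iterations.
            if best_score >= score_threshold or gain < min_gain:
                break
        return best_ir, best_score
    ```

    All three stopping conditions from the abstract are represented: a score above `score_threshold`, a per-iteration gain below `min_gain`, or exhaustion of the `max_iters` budget.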

    Utilizing a trained multi-modal combination model for content and text-based evaluation and distribution of digital video content to client devices

    Publication Number: US10860858B2

    Publication Date: 2020-12-08

    Application Number: US16009559

    Application Date: 2018-06-15

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and computer readable media that utilize a trained multi-modal combination model for content and text-based evaluation and distribution of digital video content to client devices. For example, systems described herein include training and/or utilizing a combination of trained visual and text-based prediction models to determine predicted performance metrics for a digital video. The systems described herein can further utilize a multi-modal combination model to determine a combined performance metric that considers both visual and textual performance metrics of the digital video. The systems described herein can further select one or more digital videos for distribution to one or more client devices based on combined performance metrics associated with the digital videos.
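
    As a rough illustration of the combination step, the sketch below fuses a visual performance metric and a textual performance metric into one combined metric, then ranks candidate videos for distribution. PyTorch is an assumed stack, and `visual_model`, `text_model`, and the `frames`/`text` attributes are hypothetical stand-ins, not the patent's actual interfaces:

    ```python
    import torch
    import torch.nn as nn

    class MultiModalCombiner(nn.Module):
        """Combine visual and textual performance predictions for a
        video into one metric (a sketch, not the disclosed model)."""
        def __init__(self, hidden=16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(2, hidden),  # inputs: [visual_metric, text_metric]
                nn.ReLU(),
                nn.Linear(hidden, 1),
                nn.Sigmoid(),          # combined metric in [0, 1]
            )

        def forward(self, visual_metric, text_metric):
            x = torch.stack([visual_metric, text_metric], dim=-1)
            return self.mlp(x).squeeze(-1)

    def select_videos(videos, visual_model, text_model, combiner, k=1):
        """Rank candidates by combined metric; distribute the top k."""
        scores = [combiner(visual_model(v.frames), text_model(v.text))
                  for v in videos]
        ranked = sorted(zip(videos, scores), key=lambda t: float(t[1]),
                        reverse=True)
        return [v for v, _ in ranked[:k]]
    ```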

    Generating digital video summaries utilizing aesthetics, relevancy, and generative neural networks

    Publication Number: US10650245B2

    Publication Date: 2020-05-12

    Application Number: US16004170

    Application Date: 2018-06-08

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating digital video summaries based on analyzing a digital video utilizing a relevancy neural network, an aesthetics neural network, and/or a generative neural network. For example, the disclosed systems can utilize an aesthetics neural network to determine aesthetics scores for frames of a digital video and a relevancy neural network to generate importance scores for frames of the digital video. Utilizing the aesthetics scores and importance scores, the disclosed systems can select a subset of frames and apply a generative reconstructor neural network to create a digital video reconstruction. By comparing the digital video reconstruction and the original digital video, the disclosed systems can accurately identify representative frames and flexibly generate a variety of different digital video summaries.
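
    A simplified sketch of the selection-and-reconstruction idea, assuming hypothetical `aesthetic_net`, `relevancy_net`, and `reconstructor` callables in place of the trained networks; a low reconstruction error suggests the selected frames are representative:

    ```python
    import numpy as np

    def summarize(frames, aesthetic_net, relevancy_net, reconstructor,
                  budget=10, alpha=0.5):
        """Select a frame subset scoring high on aesthetics and
        relevancy, then check how well a generative reconstructor
        rebuilds the full video from it (a simplified sketch)."""
        aesthetics = np.array([aesthetic_net(f) for f in frames])
        importance = np.array([relevancy_net(f) for f in frames])
        combined = alpha * aesthetics + (1 - alpha) * importance
        # Keep the `budget` highest-scoring frames, in temporal order.
        keep = sorted(np.argsort(combined)[-budget:])
        summary = [frames[i] for i in keep]
        # Reconstruct the full-length video from the summary; a low
        # reconstruction error means the summary is representative.
        reconstruction = reconstructor(summary, target_len=len(frames))
        error = float(np.mean((np.asarray(frames)
                               - np.asarray(reconstruction)) ** 2))
        return summary, error
    ```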

    Digital content consumption analysis

    Publication Number: US10567838B2

    Publication Date: 2020-02-18

    Application Number: US14503802

    Application Date: 2014-10-01

    Applicant: Adobe Inc.

    Abstract: Content consumption session progress is predicted based on historical observations of how users have interacted with a repository of digital content. This is approached as a matrix completion problem. Information extracted from tracking logs maintained by one or more content providers is used to estimate the extent to which various content items are consumed. The extracted session progress data is used to populate a session progress matrix in which each matrix element represents a session progress for a particular user consuming a particular content item. This matrix, which in practice will be highly sparse (often with ≳95% of its entries unobserved), can be completed using a collaborative filtering matrix completion technique. The values obtained as a result of completing the session progress matrix represent predictions with respect to how much of a given content item will be consumed by a given user.
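
    A minimal sketch of the completion step, using gradient descent on a low-rank factorization, which is one common collaborative filtering approach; the patent does not commit to this particular algorithm. `mask` marks the small fraction of entries actually observed in the tracking logs:

    ```python
    import numpy as np

    def complete_progress_matrix(observed, mask, rank=10, iters=200,
                                 lr=0.01, reg=0.1):
        """Low-rank completion of a users x content-items session
        progress matrix; `mask` is 1 where progress was observed."""
        n_users, n_items = observed.shape
        rng = np.random.default_rng(0)
        U = rng.normal(scale=0.1, size=(n_users, rank))   # user factors
        V = rng.normal(scale=0.1, size=(n_items, rank))   # item factors
        for _ in range(iters):
            err = mask * (observed - U @ V.T)  # error on observed entries only
            U += lr * (err @ V - reg * U)      # gradient step on user factors
            V += lr * (err.T @ U - reg * V)    # gradient step on item factors
        return U @ V.T  # predicted progress for every (user, item) pair
    ```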

    UTILIZING A TRAINED MULTI-MODAL COMBINATION MODEL FOR CONTENT AND TEXT-BASED EVALUATION AND DISTRIBUTION OF DIGITAL VIDEO CONTENT TO CLIENT DEVICES

    Publication Number: US20190384981A1

    Publication Date: 2019-12-19

    Application Number: US16009559

    Application Date: 2018-06-15

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and computer readable media that utilize a trained multi-modal combination model for content and text-based evaluation and distribution of digital video content to client devices. For example, systems described herein include training and/or utilizing a combination of trained visual and text-based prediction models to determine predicted performance metrics for a digital video. The systems described herein can further utilize a multi-modal combination model to determine a combined performance metric that considers both visual and textual performance metrics of the digital video. The systems described herein can further select one or more digital videos for distribution to one or more client devices based on combined performance metrics associated with the digital videos.

    CODEBOOK GENERATION FOR CLOUD-BASED VIDEO APPLICATIONS

    Publication Number: US20190208208A1

    Publication Date: 2019-07-04

    Application Number: US16295154

    Application Date: 2019-03-07

    Applicant: Adobe Inc.

    CPC classification number: H04N19/13 H04N19/176 H04N19/196 H04N19/94

    Abstract: Techniques are disclosed for improving vector quantization (VQ) codebook generation. The improved codebooks may be used for compression in cloud-based video applications. VQ achieves compression by vectorizing input video streams, matching those vectors to codebook entries, and replacing each input vector with the index of its matched codebook vector along with a residual vector representing the difference between the input vector and the codebook vector. The combination of index and residual is generally smaller than the input stream vector they collectively encode, thus providing compression. The improved codebook may be generated from training video streams by grouping together similar types of data (e.g., image data, motion data, control data) from the video stream to generate longer vectors having higher dimensions and greater structure. This improves the ability of VQ to remove redundancy and thus increases compression efficiency. Storage space is thus reduced and video transmission may be faster.
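
    A toy illustration of the encode/decode path: a k-means codebook trained on sample vectors, with each input vector replaced by a codebook index plus a residual. This is a generic VQ sketch; the patent's contribution lies in how training data is grouped into longer, more structured vectors before this step:

    ```python
    import numpy as np

    def train_codebook(training_vectors, k=256, iters=20):
        """Learn a VQ codebook with plain k-means over training vectors."""
        rng = np.random.default_rng(0)
        idx = rng.choice(len(training_vectors), size=k, replace=False)
        codebook = training_vectors[idx].astype(float)
        for _ in range(iters):
            # Assign every training vector to its nearest codebook entry.
            dists = np.linalg.norm(training_vectors[:, None] - codebook[None],
                                   axis=-1)
            assign = dists.argmin(axis=1)
            # Move each entry to the centroid of its assigned vectors.
            for j in range(k):
                members = training_vectors[assign == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)
        return codebook

    def encode(vector, codebook):
        """Replace an input vector with (index, residual)."""
        i = int(np.linalg.norm(codebook - vector, axis=1).argmin())
        return i, vector - codebook[i]

    def decode(i, residual, codebook):
        return codebook[i] + residual  # exact reconstruction
    ```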

    Actively-learned context modeling for image compression

    Publication Number: US12219180B2

    Publication Date: 2025-02-04

    Application Number: US17749846

    Application Date: 2022-05-20

    Applicant: Adobe Inc.

    Abstract: Embodiments described herein provide methods and systems for facilitating actively-learned context modeling. In one embodiment, a subset of data is selected from a training dataset corresponding with an image to be compressed, the subset corresponding with a subset of the pixels of the image. A context model is generated using the selected subset of data. The context model is generally in the form of a decision tree having a set of leaf nodes. Entropy values corresponding with each leaf node of the set of leaf nodes are determined. Each entropy value indicates an extent of diversity of context associated with the corresponding leaf node. Additional data from the training dataset is then selected based on the entropy values corresponding with the leaf nodes, and the expanded subset of data is used to generate an updated context model for use in performing compression of the image.
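
    A sketch of the active-learning loop, using a scikit-learn decision tree purely as a stand-in for the patent's context model: fit on a subset, compute per-leaf entropy of the associated pixel values, and preferentially sample new training data from high-entropy leaves. All function and parameter names here are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor  # stand-in context model

    def leaf_entropy(values, bins=16):
        """Shannon entropy of the pixel values reaching one leaf."""
        hist, _ = np.histogram(values, bins=bins)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def active_context_model(contexts, pixels, init=1000, step=500, rounds=5):
        """Fit a tree on a data subset, measure per-leaf entropy, and
        draw additional training samples from high-entropy leaves."""
        rng = np.random.default_rng(0)
        chosen = rng.choice(len(contexts), size=init, replace=False)
        tree = DecisionTreeRegressor(max_leaf_nodes=64)
        for _ in range(rounds):
            tree.fit(contexts[chosen], pixels[chosen])
            leaves = tree.apply(contexts)  # leaf id reached by every sample
            entropies = {l: leaf_entropy(pixels[leaves == l])
                         for l in np.unique(leaves)}
            # Weight each not-yet-chosen sample by the entropy of its leaf.
            weights = np.array([entropies[l] for l in leaves])
            weights[chosen] = 0.0
            if weights.sum() == 0:
                break
            probs = weights / weights.sum()
            n_extra = min(step, int((probs > 0).sum()))
            extra = rng.choice(len(contexts), size=n_extra,
                               replace=False, p=probs)
            chosen = np.union1d(chosen, extra)
        return tree
    ```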
