UTILIZING MACHINE LEARNING MODELS TO GENERATE ASPECT-BASED TRANSCRIPT SUMMARIES

    Publication No.: US20250077775A1

    Publication Date: 2025-03-06

    Application No.: US18457794

    Application Date: 2023-08-29

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating aspect-based summaries utilizing deep learning. In particular, in one or more embodiments, the disclosed systems access a transcript comprising sentences. The disclosed systems generate, utilizing a sentence classification machine learning model, aspect labels for the sentences of the transcript. The disclosed systems organize the sentences based on the aspect labels. The disclosed systems then generate, utilizing a summary machine learning model, a summary of the transcript for each labeled aspect from the organized sentences.
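    A minimal sketch of the described flow, assuming two hypothetical model callables (classify_aspect and summarize) stand in for the sentence classification and summary machine learning models:

        from collections import defaultdict

        def aspect_based_summaries(transcript_sentences, classify_aspect, summarize):
            # 1. Label each sentence with an aspect via the sentence classification model.
            labeled = [(classify_aspect(s), s) for s in transcript_sentences]
            # 2. Organize the sentences by their aspect label.
            grouped = defaultdict(list)
            for aspect, sentence in labeled:
                grouped[aspect].append(sentence)
            # 3. Generate one summary per aspect with the summary model.
            return {aspect: summarize(" ".join(sents)) for aspect, sents in grouped.items()}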

    GENERATING SYNTHETIC CODE-SWITCHED DATA FOR TRAINING LANGUAGE MODELS

    Publication No.: US20230259718A1

    Publication Date: 2023-08-17

    Application No.: US17651555

    Application Date: 2022-02-17

    Applicant: Adobe Inc.

    CPC classification number: G06F40/58 G06F40/47 G06N3/0454 G06N3/08

    Abstract: Techniques for training a language model for code switching content are disclosed. Such techniques include, in some embodiments, generating a dataset, which includes identifying one or more portions within textual content in a first language, the identified one or more portions each including one or more of offensive content or non-offensive content; translating the identified one or more portions to a second language; and reintegrating the translated one or more portions into the textual content to generate code-switched textual content. In some cases, the textual content in the first language includes offensive content and non-offensive content, the identified one or more portions include the offensive content, and the translated one or more portions include a translated version of the offensive content. In some embodiments, the code-switched textual content is at least part of a synthetic dataset usable to train a language model, such as a multilingual classification model.
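    A minimal sketch of the data synthesis step, assuming hypothetical find_spans (portion identification) and translate (first-to-second language) callables; the span selection and translation models themselves are not specified here:

        from typing import Callable, List, Tuple

        def synthesize_code_switched(text: str,
                                     find_spans: Callable[[str], List[Tuple[int, int]]],
                                     translate: Callable[[str], str]) -> str:
            # Replace the identified portions with their translations to produce
            # code-switched text; everything else stays in the first language.
            pieces, cursor = [], 0
            for start, end in sorted(find_spans(text)):
                pieces.append(text[cursor:start])
                pieces.append(translate(text[start:end]))
                cursor = end
            pieces.append(text[cursor:])
            return "".join(pieces)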

    IMAGE CAPTIONING

    Publication No.: US20230153522A1

    Publication Date: 2023-05-18

    Application No.: US17455533

    Application Date: 2021-11-18

    Applicant: ADOBE INC.

    CPC classification number: G06F40/253 G06K9/6256 G06K9/6262 G06F16/583

    Abstract: Systems and methods for image captioning are described. One or more aspects of the systems and methods include generating a training caption for a training image using an image captioning network; encoding the training caption using a multi-modal encoder to obtain an encoded training caption; encoding the training image using the multi-modal encoder to obtain an encoded training image; computing a reward function based on the encoded training caption and the encoded training image; and updating parameters of the image captioning network based on the reward function.
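    One plausible reading of the training step as a sketch, assuming CLIP-style encode_text/encode_image methods on the multi-modal encoder and a sample method on the captioning network that also returns caption log-probabilities; the reward here is cosine similarity used in a REINFORCE-style parameter update:

        import torch
        import torch.nn.functional as F

        def captioning_training_step(captioner, mm_encoder, optimizer, images):
            captions, log_probs = captioner.sample(images)             # assumed interface
            cap_emb = F.normalize(mm_encoder.encode_text(captions), dim=-1)
            img_emb = F.normalize(mm_encoder.encode_image(images), dim=-1)
            reward = (cap_emb * img_emb).sum(dim=-1)                   # caption-image similarity
            loss = -(reward.detach() * log_probs).mean()               # policy-gradient surrogate
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return reward.mean().item()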

    USING NEURAL NETWORKS TO DETECT INCONGRUENCE BETWEEN HEADLINES AND BODY TEXT OF DOCUMENTS

    Publication No.: US20230153341A1

    Publication Date: 2023-05-18

    Application No.: US17528901

    Application Date: 2021-11-17

    Applicant: Adobe Inc.

    Inventor: Seunghyun Yoon

    CPC classification number: G06F16/35 G06K9/00469 G06N3/04 G06K9/6256

    Abstract: An incongruent headline detection system receives a request to determine a headline incongruence score for an electronic document. The incongruent headline detection system determines the headline incongruence score for the electronic document by applying a machine learning model to the electronic document. Applying the machine learning model to the electronic document includes generating a graph representing a textual similarity between a headline of the electronic document and each of a plurality of paragraphs of the electronic document and determining the headline incongruence score using the graph. The incongruent headline detection system transmits, responsive to the request, the headline incongruence score for the electronic document.
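    An illustrative scoring sketch only; the patent applies a trained machine learning model over the graph, whereas this example simply averages the edge weights of a headline-to-paragraph similarity graph built from assumed sentence embeddings:

        import numpy as np

        def headline_incongruence_score(headline_vec, paragraph_vecs):
            h = headline_vec / np.linalg.norm(headline_vec)
            # Edge weights: cosine similarity between the headline and each paragraph.
            sims = [float(h @ (p / np.linalg.norm(p))) for p in paragraph_vecs]
            # Low headline-paragraph similarity maps to high incongruence.
            return 1.0 - float(np.mean(sims))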

    Multitask Machine-Learning Model Training and Training Data Augmentation

    Publication No.: US20230419164A1

    Publication Date: 2023-12-28

    Application No.: US17846428

    Application Date: 2022-06-22

    Applicant: Adobe Inc.

    CPC classification number: G06N20/00

    Abstract: Multitask machine-learning model training and training data augmentation techniques are described. In one example, training is performed for multiple tasks simultaneously as part of training a multitask machine-learning model using question pairs. Examples of the multiple tasks include question summarization and recognizing question entailment. Further, a loss function is described that incorporates a parameter sharing loss that is configured to adjust an amount that parameters are shared between corresponding layers trained for the question summarization and question entailment tasks, respectively. In an implementation, training data augmentation techniques are also employed by synthesizing question pairs, automatically and without user intervention, to improve accuracy in model training.
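    A sketch of one possible form of the combined objective, assuming the parameter sharing loss is an L2 penalty between corresponding layers of the two task-specific networks (the exact formulation is not given here):

        import torch

        def multitask_loss(loss_summarization, loss_entailment,
                           layers_task1, layers_task2, share_weight=0.1):
            # Penalize divergence between corresponding layers of the two tasks;
            # share_weight adjusts how strongly parameters are shared.
            share_loss = sum(
                torch.sum((p1 - p2) ** 2)
                for l1, l2 in zip(layers_task1, layers_task2)
                for p1, p2 in zip(l1.parameters(), l2.parameters())
            )
            return loss_summarization + loss_entailment + share_weight * share_loss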

    BI-DIRECTIONAL RECURRENT ENCODERS WITH MULTI-HOP ATTENTION FOR SPEECH EMOTION RECOGNITION

    Publication No.: US20220076693A1

    Publication Date: 2022-03-10

    Application No.: US17526810

    Application Date: 2021-11-15

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for determining speech emotion. In particular, a speech emotion recognition system generates an audio feature vector and a textual feature vector for a sequence of words. Further, the speech emotion recognition system utilizes a neural attention mechanism that intelligently blends together the audio feature vector and the textual feature vector to generate attention output. Using the attention output, which includes consideration of both audio and text modalities for speech corresponding to the sequence of words, the speech emotion recognition system can apply attention methods to one of the feature vectors to generate a hidden feature vector. Based on the hidden feature vector, the speech emotion recognition system can generate a speech emotion probability distribution of emotions among a group of candidate emotions, and then select one of the candidate emotions as corresponding to the sequence of words.
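    A simplified fusion head as a sketch, assuming single-hop attention with the audio feature vector as the query over per-word textual features (the patent describes multi-hop attention over bi-directional recurrent encodings):

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class AudioTextEmotionHead(nn.Module):
            def __init__(self, dim, num_emotions):
                super().__init__()
                self.classifier = nn.Linear(2 * dim, num_emotions)

            def forward(self, audio_vec, text_vecs):
                # audio_vec: (batch, dim); text_vecs: (batch, seq_len, dim)
                scores = torch.bmm(text_vecs, audio_vec.unsqueeze(-1)).squeeze(-1)
                attn = F.softmax(scores, dim=-1)                       # attention over words
                blended = torch.bmm(attn.unsqueeze(1), text_vecs).squeeze(1)
                hidden = torch.cat([audio_vec, blended], dim=-1)       # hidden feature vector
                return F.softmax(self.classifier(hidden), dim=-1)      # emotion distribution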
