SEGMENTATION AND HIERARCHICAL CLUSTERING OF VIDEO

    Publication Number: US20220076023A1

    Publication Date: 2022-03-10

    Application Number: US17017344

    Application Date: 2020-09-10

    Applicant: ADOBE INC.

    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.
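
    To make the pipeline in the abstract concrete, below is a minimal Python sketch of the two steps it names: merging detected speech and scene boundaries into clip atoms, then clustering the atoms into a static multi-level hierarchy. The function names, the greedy duration-threshold merge, and the granularity values are illustrative assumptions; the patent does not prescribe a specific clustering rule.

    from dataclasses import dataclass

    @dataclass
    class Segment:
        start: float  # seconds
        end: float

    def clip_atoms(speech_bounds, scene_bounds, video_end):
        """Union the detected speech and scene boundaries into one sorted
        cut list, then emit disjoint segments covering the whole video."""
        cuts = sorted({0.0, video_end, *speech_bounds, *scene_bounds})
        return [Segment(a, b) for a, b in zip(cuts, cuts[1:])]

    def coarsen(segments, min_duration):
        """One clustering pass: greedily merge a segment into its left
        neighbor while that neighbor is shorter than min_duration, so each
        level stays a complete, non-overlapping cover of the video."""
        merged = [segments[0]]
        for seg in segments[1:]:
            if merged[-1].end - merged[-1].start < min_duration:
                merged[-1] = Segment(merged[-1].start, seg.end)
            else:
                merged.append(seg)
        return merged

    def hierarchy(speech_bounds, scene_bounds, video_end,
                  granularities=(1.0, 4.0, 16.0)):
        """Pre-compute a static multi-level segmentation: level 0 holds the
        clip atoms; each coarser level re-clusters the previous one."""
        levels = [clip_atoms(speech_bounds, scene_bounds, video_end)]
        for g in granularities:
            levels.append(coarsen(levels[-1], g))
        return levels

    For example, a 60-second video with speech boundaries at 2.5 s and 7.0 s and a scene cut at 7.0 s yields the level-0 atoms (0, 2.5), (2.5, 7.0), and (7.0, 60); each merge pass preserves full coverage and disjointness, matching the abstract's description of every level.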

    USING MACHINE-LEARNING MODELS TO DETERMINE MOVEMENTS OF A MOUTH CORRESPONDING TO LIVE SPEECH

    Publication Number: US11211060B2

    Publication Date: 2021-12-28

    Application Number: US16887418

    Application Date: 2020-05-29

    Applicant: Adobe Inc.

    Abstract: Disclosed systems and methods predict visemes from an audio sequence. In an example, a viseme-generation application accesses a first audio sequence that is mapped to a sequence of visemes. The first audio sequence has a first length and represents phonemes. The application adjusts a second length of a second audio sequence such that the second length equals the first length and represents the phonemes. The application adjusts the sequence of visemes to the second audio sequence such that phonemes in the second audio sequence correspond to the phonemes in the first audio sequence. The application trains a machine-learning model with the second audio sequence and the sequence of visemes. The machine-learning model predicts an additional sequence of visemes based on an additional sequence of audio.
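
    As a rough illustration of the training step described above, the sketch below pairs per-frame audio features with viseme labels in PyTorch. The feature dimensionality, the viseme inventory size, and the LSTM architecture are assumptions made for illustration; the abstract does not name a model family.

    import torch
    import torch.nn as nn

    NUM_VISEMES = 12  # assumed inventory size; the abstract does not fix one

    class VisemePredictor(nn.Module):
        """Maps per-frame audio features (e.g., MFCCs) to viseme logits.
        The LSTM is a stand-in for the patent's unspecified model."""
        def __init__(self, feat_dim=13, hidden=128):
            super().__init__()
            self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, NUM_VISEMES)

        def forward(self, feats):            # feats: (batch, time, feat_dim)
            out, _ = self.rnn(feats)
            return self.head(out)            # (batch, time, NUM_VISEMES)

    def train_step(model, optimizer, feats, viseme_ids):
        """One supervised step on time-aligned (audio, viseme) pairs,
        i.e., the adjusted second audio sequence and its viseme labels."""
        logits = model(feats)
        loss = nn.functional.cross_entropy(
            logits.flatten(0, 1), viseme_ids.flatten())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    Because the model emits a viseme distribution per audio frame, the trained network can be run frame-by-frame on incoming audio, which is what lets it drive mouth movements for live speech.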

    USING MACHINE-LEARNING MODELS TO DETERMINE MOVEMENTS OF A MOUTH CORRESPONDING TO LIVE SPEECH

    Publication Number: US20190392823A1

    Publication Date: 2019-12-26

    Application Number: US16016418

    Application Date: 2018-06-22

    Applicant: Adobe Inc.

    Abstract: Disclosed systems and methods predict visemes from an audio sequence. A viseme-generation application accesses a first set of training data that includes a first audio sequence representing a sentence spoken by a first speaker and a sequence of visemes. Each viseme is mapped to a respective audio sample of the first audio sequence. The viseme-generation application creates a second set of training data by adjusting a second audio sequence spoken by a second speaker speaking the sentence such that the second and first sequences have the same length and at least one phoneme occurs at the same time stamp in the first sequence and in the second sequence. The viseme-generation application maps the sequence of visemes to the second audio sequence and trains a viseme prediction model to predict a sequence of visemes from an audio sequence.
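
    The length-and-timing adjustment described here can be pictured as a piecewise warp of the second speaker's recording onto the first speaker's phoneme timestamps. The sketch below uses simple linear resampling between phoneme anchors; the anchor source (e.g., a forced aligner) and the warping method are assumptions, since the abstract leaves both open.

    import numpy as np

    def align_to_reference(wav2, anchors1, anchors2, sr=16000):
        """Piecewise-linearly warp wav2 (the second speaker's recording) so
        each phoneme anchor lands at the same timestamp as in the first
        recording. anchors1/anchors2 are equal-length lists of matching
        phoneme onsets in seconds (assumed to include the utterance start
        and end, e.g., from a forced aligner)."""
        s1 = (np.asarray(anchors1) * sr).astype(int)  # reference indices
        s2 = (np.asarray(anchors2) * sr).astype(int)  # second-speaker indices
        out = []
        for (a1, b1), (a2, b2) in zip(zip(s1, s1[1:]), zip(s2, s2[1:])):
            span = wav2[a2:b2]
            # stretch/squeeze this phoneme span to the reference span length
            idx = np.linspace(0, len(span) - 1, num=b1 - a1)
            out.append(np.interp(idx, np.arange(len(span)), span))
        return np.concatenate(out)

    After warping, the second recording shares the first recording's timing, so the viseme sequence mapped to the first audio can be reused directly as labels for the second, which is how the abstract's second training set is formed.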
