Animation Using Keyframing and Projected Dynamics Simulation

    Publication Number: US20190197758A1

    Publication Date: 2019-06-27

    Application Number: US16291585

    Filing Date: 2019-03-04

    Applicant: Adobe Inc.

    Abstract: In embodiments of animation using keyframing and projected dynamics simulation, an animation object is displayed with handles associated with object regions for the animation object, each handle being selectable for setting animation constraints on an object region. An animation simulator receives a user input designating a particular handle with an animation constraint, and sets the animation constraint on the particular handle for the associated object region. The animation simulator also receives another user input, designating a timing of the object region associated with the particular handle of the animation object through multiple frames in an animation sequence. The animation simulator projects a simulation of the animation object utilizing a projected dynamics algorithm that applies physics to simulate the set of object regions of the animation object in the animation sequence, the simulation including simulating the object region associated with the particular handle based on the timing and the animation constraint.
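The abstract describes pinning keyframed handles as hard animation constraints inside a projected-dynamics physics solve. The sketch below is a simplified, position-based approximation of that local/global loop, not Adobe's implementation; all function names, parameters, and the spring model are illustrative assumptions.

```python
import numpy as np

def simulate_step(positions, velocities, springs, rest_lengths, handle,
                  keyframe, dt=0.033, stiffness=50.0, iterations=10,
                  gravity=-9.8):
    """One animation step (illustrative sketch): predict inertial positions,
    then alternate local constraint projections (springs toward rest length)
    with re-application of the hard keyframe constraint on the handle."""
    x = positions + dt * velocities
    x[:, 1] += dt * dt * gravity           # gravity on the inertial guess
    w = dt * dt * stiffness / (1.0 + dt * dt * stiffness)  # blend weight
    for _ in range(iterations):
        x[handle] = keyframe               # animation constraint: pin handle
        for (i, j), r in zip(springs, rest_lengths):
            d = x[j] - x[i]
            n = np.linalg.norm(d)
            if n < 1e-9:
                continue
            corr = 0.5 * (n - r) * d / n   # local step: project to rest length
            x[i] += w * corr
            x[j] -= w * corr
    x[handle] = keyframe                   # keyframed region wins every step
    new_velocities = (x - positions) / dt
    return x, new_velocities
```

Driving `keyframe` from per-frame user timing data reproduces the described behavior: the constrained region follows the keyframes exactly while the rest of the object is simulated with physics.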

    Interactive scene graph manipulation for visualization authoring

    Publication Number: US10290128B2

    Publication Date: 2019-05-14

    Application Number: US14937683

    Filing Date: 2015-11-10

    Applicant: Adobe Inc.

    Abstract: Techniques for interactive scene graph manipulation for visualization authoring are described. In implementations, visual marks are grouped into containers. Each container includes layout settings independent of other containers, and the layout settings are individually adjustable. The visual marks are configured to represent data values. Additionally, the containers are nested in a hierarchy. Then, data visualizations are constructed for display via a user interface of a display device. For example, the data visualizations can be constructed by applying data values to the visual marks and layout settings of the containers to the visual marks grouped within the nested containers to generate the data visualizations.
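The core data structure here is a scene graph of nested containers, each carrying layout settings that apply only to its own children. A minimal sketch of that idea follows; the classes, the bar-length encoding, and the sequential layout rule are assumptions for illustration, not the patented design.

```python
from dataclasses import dataclass, field

@dataclass
class Mark:
    value: float        # data value the visual mark represents
    x: float = 0.0
    y: float = 0.0
    def extent(self):
        return self.value            # bar length encodes the data value

@dataclass
class Container:
    children: list = field(default_factory=list)   # Marks or Containers
    direction: str = "horizontal"    # layout setting, independent per container
    gap: float = 5.0                 # individually adjustable spacing
    def extent(self):
        sizes = [c.extent() for c in self.children]
        return sum(sizes) + self.gap * max(len(sizes) - 1, 0)
    def layout(self, x=0.0, y=0.0):
        """Recursively position children; each container applies its own
        layout settings without affecting sibling containers."""
        for child in self.children:
            if isinstance(child, Mark):
                child.x, child.y = x, y
            else:
                child.layout(x, y)
            if self.direction == "horizontal":
                x += child.extent() + self.gap
            else:
                y += child.extent() + self.gap
```

Because each container resolves its own direction and gap, adjusting one container's settings re-lays-out only its subtree, which is what makes the hierarchy interactively editable.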

    HIERARCHICAL SEGMENTATION BASED SOFTWARE TOOL USAGE IN A VIDEO

    Publication Number: US20220301313A1

    Publication Date: 2022-09-22

    Application Number: US17805076

    Filing Date: 2022-06-02

    Applicant: Adobe Inc.

    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.
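The two-stage process in this abstract (detect boundaries, form clip atoms, cluster into coarser levels) can be sketched with a toy merge rule. The duration-based merge criterion below is an illustrative assumption, not the patent's actual clustering method; it only demonstrates that every level stays a complete, disjoint cover of the video.

```python
def clip_atoms(speech_bounds, scene_bounds, duration):
    """Union of speech and scene boundaries defines the finest-level
    segments (clip atoms) of unequal duration."""
    cuts = sorted(set(speech_bounds) | set(scene_bounds) | {0.0, duration})
    return list(zip(cuts, cuts[1:]))

def build_hierarchy(atoms, levels=3):
    """Each coarser level merges one shortest adjacent pair of segments,
    so every level remains a complete, non-overlapping cover."""
    hierarchy = [list(atoms)]
    segs = list(atoms)
    for _ in range(levels - 1):
        if len(segs) > 1:
            i = min(range(len(segs) - 1),
                    key=lambda k: (segs[k][1] - segs[k][0]) +
                                  (segs[k + 1][1] - segs[k + 1][0]))
            segs = segs[:i] + [(segs[i][0], segs[i + 1][1])] + segs[i + 2:]
        hierarchy.append(list(segs))
    return hierarchy
```

Because the hierarchy is static and pre-computed, a video editor can switch granularity levels instantly instead of re-segmenting on every interaction.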

    HIERARCHICAL SEGMENTATION OF SCREEN CAPTURED, SCREENCASTED, OR STREAMED VIDEO

    Publication Number: US20220292831A1

    Publication Date: 2022-09-15

    Application Number: US17805080

    Filing Date: 2022-06-02

    Applicant: Adobe Inc.

    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.

    HIERARCHICAL SEGMENTATION BASED ON VOICE-ACTIVITY

    Publication Number: US20220292830A1

    Publication Date: 2022-09-15

    Application Number: US17805075

    Filing Date: 2022-06-02

    Applicant: Adobe Inc.

    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.

    Using machine-learning models to determine movements of a mouth corresponding to live speech

    Publication Number: US10699705B2

    Publication Date: 2020-06-30

    Application Number: US16016418

    Filing Date: 2018-06-22

    Applicant: Adobe Inc.

    Abstract: Disclosed systems and methods predict visemes from an audio sequence. A viseme-generation application accesses a first set of training data that includes a first audio sequence representing a sentence spoken by a first speaker and a sequence of visemes. Each viseme is mapped to a respective audio sample of the first audio sequence. The viseme-generation application creates a second set of training data adjusting a second audio sequence spoken by a second speaker speaking the sentence such that the second and first sequences have the same length and at least one phoneme occurs at the same time stamp in the first sequence and in the second sequence. The viseme-generation application maps the sequence of visemes to the second audio sequence and trains a viseme prediction model to predict a sequence of visemes from an audio sequence.
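The key data-augmentation step here is aligning a second speaker's recording of the same sentence to the first so that viseme labels can be transferred frame-by-frame. A common way to sketch such an alignment is dynamic time warping (DTW); the functions below are a minimal illustration under that assumption, not the patent's actual alignment procedure.

```python
def dtw_path(a, b):
    """Dynamic time warping between two 1-D feature sequences; returns
    the optimal alignment path as (i, j) index pairs."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1],
                                 cost[i - 1][j - 1])
    # backtrack from the end of both sequences to recover the path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        moves = {(i - 1, j - 1): cost[i - 1][j - 1],
                 (i - 1, j): cost[i - 1][j],
                 (i, j - 1): cost[i][j - 1]}
        i, j = min(moves, key=moves.get)
    return list(reversed(path))

def transfer_visemes(visemes_a, feats_a, feats_b):
    """Map speaker A's per-frame viseme labels onto speaker B's frames
    via the DTW alignment, yielding a second labeled training set."""
    mapped = [None] * len(feats_b)
    for i, j in dtw_path(feats_a, feats_b):
        mapped[j] = visemes_a[i]
    return mapped
```

After this transfer, both speakers' audio carries the same viseme supervision, so a single prediction model can be trained on the combined data.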
