Algorithmic approach to finding correspondence between graphical elements

    Publication No.: US11182905B2

    Publication Date: 2021-11-23

    Application No.: US16825583

    Application Date: 2020-03-20

    Applicant: Adobe Inc.

    Abstract: Introduced here are computer programs and associated computer-implemented techniques for finding the correspondence between sets of graphical elements that share a similar structure. In contrast to conventional approaches, this approach can leverage the similar structure to discover how two sets of graphical elements are related to one another without the relationship needing to be explicitly specified. To accomplish this, a graphics editing platform can employ one or more algorithms designed to encode the structure of graphical elements using a directed graph and then compute element-to-element correspondence between different sets of graphical elements that share a similar structure.
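The structure-matching idea in this abstract can be sketched as follows. This is an illustrative assumption, not the patented algorithm: element nesting is modeled as a tree (a simple directed graph), and two elements correspond when they occupy the same structural position (the same path of child indices from the root). The function names and dict-based tree are hypothetical.

```python
# Hypothetical sketch: encode element nesting as a directed graph (a tree of
# dicts) and match elements across two sets by structural position.

def structural_signature(tree, path=()):
    """Yield (path-of-child-indices, element-name) pairs for a nested dict tree."""
    yield path, tree["name"]
    for i, child in enumerate(tree.get("children", [])):
        yield from structural_signature(child, path + (i,))

def correspondence(tree_a, tree_b):
    """Map each element of tree_a to the element of tree_b that occupies
    the same structural position, without any explicit pairing given."""
    sig_b = {path: name for path, name in structural_signature(tree_b)}
    return {name: sig_b[path]
            for path, name in structural_signature(tree_a)
            if path in sig_b}

# Two cards with a similar structure but different content:
card_a = {"name": "card1", "children": [{"name": "title1"}, {"name": "icon1"}]}
card_b = {"name": "card2", "children": [{"name": "title2"}, {"name": "icon2"}]}
print(correspondence(card_a, card_b))
# {'card1': 'card2', 'title1': 'title2', 'icon1': 'icon2'}
```

Because the match keys on structural position rather than element names, the relationship between the two sets never has to be specified by hand.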

    POSE SELECTION AND ANIMATION OF CHARACTERS USING VIDEO DATA AND TRAINING TECHNIQUES

    Publication No.: US20210158593A1

    Publication Date: 2021-05-27

    Application No.: US16692471

    Application Date: 2019-11-22

    Abstract: This disclosure generally relates to character animation. More specifically, but not by way of limitation, this disclosure relates to pose selection using data analytics techniques applied to training data, and to generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes obtaining a selection of training poses of the subject and a set of character poses; obtaining a performance video of the subject, wherein the performance video includes a plurality of performance frames containing poses performed by the subject; grouping the performance frames into groups; assigning a selected training pose to each group of performance frames using the clusters of training frames; generating a sequence of character poses based on the groups of performance frames and their assigned training poses; and outputting the sequence of character poses.
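A minimal sketch of the assignment step described above, under stated assumptions: poses are lists of 2D joint coordinates, and each group of performance frames is matched to the nearest training pose by Euclidean distance on the group's mean pose. The distance metric and pose format are illustrative choices, not taken from the patent.

```python
import math

# Illustrative sketch (not the patented method): assign each group of
# performance frames to the closest training pose.

def mean_pose(frames):
    """Average joint positions over a group of frames (lists of (x, y) tuples)."""
    n = len(frames)
    return [(sum(f[j][0] for f in frames) / n, sum(f[j][1] for f in frames) / n)
            for j in range(len(frames[0]))]

def nearest_training_pose(group, training_poses):
    """Return the key of the training pose closest to the group's mean pose."""
    avg = mean_pose(group)
    def dist(pose):
        return math.sqrt(sum((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
                             for a, b in zip(avg, pose)))
    return min(training_poses, key=lambda k: dist(training_poses[k]))

# Two named training poses and one group of two performance frames:
training = {"wave": [(0.0, 1.0), (1.0, 1.0)], "rest": [(0.0, 0.0), (1.0, 0.0)]}
group = [[(0.0, 0.9), (1.0, 1.1)], [(0.1, 1.0), (0.9, 1.0)]]
print(nearest_training_pose(group, training))
# wave
```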

    POSE SELECTION AND ANIMATION OF CHARACTERS USING VIDEO DATA AND TRAINING TECHNIQUES

    Publication No.: US20210158565A1

    Publication Date: 2021-05-27

    Application No.: US16692450

    Application Date: 2019-11-22

    Abstract: This disclosure generally relates to character animation. More specifically, this disclosure relates to pose selection using data analytics techniques applied to training data, and to generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes extracting a set of joint positions from each frame of a training video that includes the subject; grouping the frames into frame groups using their joint positions; identifying a representative frame for each frame group; clustering the frame groups using the representative frames; outputting a visualization of the clusters at a user interface; and receiving a selection of a cluster for animation of the subject.

    Integrated computing environment for managing and presenting design iterations

    Publication No.: US10896161B2

    Publication Date: 2021-01-19

    Application No.: US15908079

    Application Date: 2018-02-28

    Applicant: Adobe Inc.

    Abstract: Techniques of managing design iterations include generating data linking selected snapshot histories with contextual notes within a single presentation environment. A designer may generate a design iteration in a design environment. Once the design iteration is complete, the designer may show a snapshot of the design iteration to a stakeholder. The stakeholder then may provide written contextual notes within the design environment. The computer links the contextual notes to the snapshot and stores the snapshot and contextual notes in a database. When the designer generates a new design iteration from the previous design iteration and the contextual notes, the computer generates a new snapshot and a link to the previous snapshot to form a timeline of snapshots. The designer may then present the snapshots, the timeline, and the contextual notes to the stakeholder as a coherent history of how the design of the mobile app evolved to its present state.
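The snapshot-linking described above amounts to a backward-linked chain of snapshots, each carrying its contextual notes. The class and field names below are hypothetical, chosen only to mirror the abstract's description:

```python
# Hypothetical data model: each snapshot links to the previous snapshot and
# stores the stakeholder's contextual notes, forming a timeline.

class Snapshot:
    def __init__(self, design, notes=None, previous=None):
        self.design = design          # the captured design iteration
        self.notes = notes or []      # stakeholder's contextual notes
        self.previous = previous      # link to the prior snapshot

    def timeline(self):
        """Walk the links back to the first snapshot and return them
        oldest-first, as a coherent history of the design."""
        chain, node = [], self
        while node:
            chain.append(node)
            node = node.previous
        return list(reversed(chain))

v1 = Snapshot("login screen v1", notes=["make button larger"])
v2 = Snapshot("login screen v2", notes=["approved"], previous=v1)
print([s.design for s in v2.timeline()])
# ['login screen v1', 'login screen v2']
```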

    Using machine-learning models to determine movements of a mouth corresponding to live speech

    Publication No.: US10699705B2

    Publication Date: 2020-06-30

    Application No.: US16016418

    Application Date: 2018-06-22

    Applicant: Adobe Inc.

    Abstract: Disclosed systems and methods predict visemes from an audio sequence. A viseme-generation application accesses a first set of training data that includes a first audio sequence representing a sentence spoken by a first speaker and a sequence of visemes. Each viseme is mapped to a respective audio sample of the first audio sequence. The viseme-generation application creates a second set of training data adjusting a second audio sequence spoken by a second speaker speaking the sentence such that the second and first sequences have the same length and at least one phoneme occurs at the same time stamp in the first sequence and in the second sequence. The viseme-generation application maps the sequence of visemes to the second audio sequence and trains a viseme prediction model to predict a sequence of visemes from an audio sequence.
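The length-adjustment step can be sketched under a strong simplifying assumption: the patent describes aligning the second speaker's audio so phonemes land on the same timestamps, whereas the stand-in below just uniformly resamples the second feature sequence to the first sequence's length so the existing viseme labels can be reused. Variable names are illustrative.

```python
# Sketch of the training-data alignment step (uniform resampling is an
# illustrative stand-in for the patent's phoneme-timestamp alignment).

def resample(seq, target_len):
    """Nearest-neighbour resampling of a feature sequence to target_len."""
    step = len(seq) / target_len
    return [seq[min(int(i * step), len(seq) - 1)] for i in range(target_len)]

first_audio  = ["a", "a", "o", "o", "o", "m"]   # speaker 1, 6 samples
visemes      = ["A", "A", "O", "O", "O", "M"]   # one viseme label per sample
second_audio = ["a", "o", "o", "m"]             # speaker 2, same sentence, shorter

aligned = resample(second_audio, len(first_audio))
second_training_set = list(zip(aligned, visemes))  # reuse speaker 1's labels
print(aligned)
# ['a', 'a', 'o', 'o', 'o', 'm']
```

Stretching the second recording to the first's length is what lets one labeled recording bootstrap training data for additional speakers.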

    GENERATING TARGET-CHARACTER-ANIMATION SEQUENCES BASED ON STYLE-AWARE PUPPETS PATTERNED AFTER SOURCE-CHARACTER-ANIMATION SEQUENCES

    Publication No.: US20200035010A1

    Publication Date: 2020-01-30

    Application No.: US16047839

    Application Date: 2018-07-27

    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use style-aware puppets patterned after a source-character-animation sequence to generate a target-character-animation sequence. In particular, the disclosed systems can generate style-aware puppets based on an animation character drawn or otherwise created (e.g., by an artist) for the source-character-animation sequence. The style-aware puppets can include, for instance, a character-deformational model, a skeletal-difference map, and a visual-texture representation of an animation character from a source-character-animation sequence. By using style-aware puppets, the disclosed systems can both preserve and transfer a detailed visual appearance and stylized motion of an animation character from a source-character-animation sequence to a target-character-animation sequence.
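A style-aware puppet, as described, bundles three components. The container below is a hypothetical mirror of that structure; the field names, scalar per-joint offsets, and `retarget` rule are illustrative assumptions, not the disclosed systems' representation.

```python
from dataclasses import dataclass

# Hypothetical container for the three puppet components named in the abstract.

@dataclass
class StyleAwarePuppet:
    deformation_model: dict    # character-deformational model parameters
    skeletal_difference: dict  # per-joint offsets vs. a reference skeleton
    texture: str               # visual-texture representation (e.g. a file path)

    def retarget(self, target_skeleton):
        """Apply the stored per-joint offsets to a target skeleton,
        carrying the source character's stylized motion over to the target."""
        return {joint: pos + self.skeletal_difference.get(joint, 0.0)
                for joint, pos in target_skeleton.items()}

puppet = StyleAwarePuppet(
    deformation_model={"stiffness": 0.8},
    skeletal_difference={"elbow": 0.5, "knee": -0.25},
    texture="hero.png",
)
print(puppet.retarget({"elbow": 1.0, "knee": 2.0, "wrist": 3.0}))
# {'elbow': 1.5, 'knee': 1.75, 'wrist': 3.0}
```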

    INTEGRATED COMPUTING ENVIRONMENT FOR MANAGING AND PRESENTING DESIGN ITERATIONS

    Publication No.: US20190266265A1

    Publication Date: 2019-08-29

    Application No.: US15908079

    Application Date: 2018-02-28

    Applicant: Adobe Inc.

    Abstract: Techniques of managing design iterations include generating data linking selected snapshot histories with contextual notes within a single presentation environment. A designer may generate a design iteration in a design environment. Once the design iteration is complete, the designer may show a snapshot of the design iteration to a stakeholder. The stakeholder then may provide written contextual notes within the design environment. The computer links the contextual notes to the snapshot and stores the snapshot and contextual notes in a database. When the designer generates a new design iteration from the previous design iteration and the contextual notes, the computer generates a new snapshot and a link to the previous snapshot to form a timeline of snapshots. The designer may then present the snapshots, the timeline, and the contextual notes to the stakeholder as a coherent history of how the design of the mobile app evolved to its present state.
