-
Publication Number: US12206930B2
Publication Date: 2025-01-21
Application Number: US18154412
Filing Date: 2023-01-13
Applicant: Adobe Inc.
Inventor: Kim Pascal Pimmel , Stephen Joseph Diverdi , Jiaju MA , Rubaiat Habib , Li-Yi Wei , Hijung Shin , Deepali Aneja , John G. Nelson , Wilmot Li , Dingzeyu Li , Lubomira Assenova Dontcheva , Joel Richard Brandt
IPC: H04N21/431 , G06F3/04812 , G06F3/0482 , H04N21/4402
Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media that provide mechanisms for multimedia effect addition and editing support in text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. The method further includes identifying a video effect corresponding to the particular type of text stylization or layout, applying the video effect to a video segment corresponding to the text segment, and applying the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
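The abstract describes mapping a selected transcript text segment to the corresponding video time range and pairing a chosen text stylization with a video effect. The sketch below illustrates that idea only; it is not the patented implementation, and TranscriptWord, EFFECT_FOR_STYLE, and the returned edit record are hypothetical names.

```python
# Minimal sketch, assuming word-level timestamps in the transcript; all names
# here are illustrative and not taken from the disclosure.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TranscriptWord:
    text: str
    start: float   # seconds into the video's audio track
    end: float

# Assumed UI-level mapping: text stylization chosen in the transcript -> video effect.
EFFECT_FOR_STYLE = {"bold": "zoom_in", "highlight": "color_pop"}

def selection_time_range(words: List[TranscriptWord], first: int, last: int) -> Tuple[float, float]:
    """Time range of the video segment covered by the selected words."""
    return words[first].start, words[last].end

def apply_style_and_effect(words, first, last, style):
    """Record both the text stylization and the matching video effect for one edit."""
    start, end = selection_time_range(words, first, last)
    return {"segment": (start, end), "style": style, "effect": EFFECT_FOR_STYLE[style]}
```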
-
Publication Number: US11182905B2
Publication Date: 2021-11-23
Application Number: US16825583
Filing Date: 2020-03-20
Applicant: Adobe Inc.
Inventor: Hijung Shin , Holger Winnemoeller , Wilmot Li
Abstract: Introduced here are computer programs and associated computer-implemented techniques for finding the correspondence between sets of graphical elements that share a similar structure. In contrast to conventional approaches, this approach can leverage the similar structure to discover how two sets of graphical elements are related to one another without the relationship needing to be explicitly specified. To accomplish this, a graphics editing platform can employ one or more algorithms designed to encode the structure of graphical elements using a directed graph and then compute element-to-element correspondence between different sets of graphical elements that share a similar structure.
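As a rough illustration of the correspondence idea, the sketch below pairs up elements of two structurally similar element trees (a simple form of directed graph) by walking them in parallel. It is an assumption-laden toy, not Adobe's algorithm; the Element type and matching rule are invented for illustration.

```python
# Toy structural correspondence between two element trees that share a layout.
from dataclasses import dataclass, field

@dataclass
class Element:
    kind: str                              # e.g. "group", "rect", "text"
    children: list = field(default_factory=list)

def correspond(a: Element, b: Element, mapping=None):
    """Pair elements of two trees whose kinds and child counts line up."""
    if mapping is None:
        mapping = []
    if a.kind == b.kind and len(a.children) == len(b.children):
        mapping.append((a, b))
        for ca, cb in zip(a.children, b.children):
            correspond(ca, cb, mapping)
    return mapping
```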
-
Publication Number: US20210158593A1
Publication Date: 2021-05-27
Application Number: US16692471
Filing Date: 2019-11-22
Applicant: Adobe Inc. , Princeton University
Inventor: Wilmot Li , Hijung Shin , Adam Finkelstein , Nora Willett
Abstract: This disclosure generally relates to character animation. More specifically, but not by way of limitation, this disclosure relates to pose selection using data analytics techniques applied to training data, and generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes obtaining a selection of training poses of the subject and a set of character poses, obtaining a performance video of the subject, wherein the performance video includes a plurality of performance frames that include poses performed by the subject, grouping the plurality of performance frames into groups of performance frames, assigning a selected training pose from the selection of training poses to each group of performance frames using the clusters of training frames, generating a sequence of character poses based on the groups of performance frames and their assigned training poses, and outputting the sequence of character poses.
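One step the abstract names is assigning a training pose to each group of performance frames. The sketch below shows one plausible way to do that (nearest training pose by mean joint-position distance); the data layout and the nearest-neighbor rule are assumptions, not the disclosed method.

```python
# Illustrative assignment of performance-frame groups to training poses.
import numpy as np

def assign_poses(performance_groups, training_poses, pose_for_training):
    """performance_groups: list of (n_frames, n_joints, 2) arrays of joint positions.
    training_poses: (n_poses, n_joints, 2) array.
    pose_for_training: list mapping training-pose index -> character pose id.
    """
    sequence = []
    for group in performance_groups:
        mean_pose = group.mean(axis=0)                              # average joints over the group
        dists = np.linalg.norm(training_poses - mean_pose, axis=(1, 2))
        sequence.append(pose_for_training[int(dists.argmin())])    # nearest training pose
    return sequence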
-
Publication Number: US20210158565A1
Publication Date: 2021-05-27
Application Number: US16692450
Filing Date: 2019-11-22
Applicant: Adobe Inc. , Princeton University
Inventor: Wilmot Li , Hijung Shin , Adam Finkelstein , Nora Willett
Abstract: This disclosure generally relates to character animation. More specifically, this disclosure relates to pose selection using data analytics techniques applied to training data, and generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes extracting sets of joint positions from the frames of a training video that includes the subject, grouping those frames into frame groups using the sets of joint positions for each frame, identifying a representative frame for each frame group, clustering the frame groups into clusters using the representative frames, outputting a visualization of the clusters at a user interface, and receiving a selection of a cluster for animation of the subject.
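The clustering and representative-frame steps could look roughly like the sketch below, which flattens per-frame joint positions, clusters them with k-means, and picks the frame closest to each cluster center. The library choice (scikit-learn) and data layout are assumptions for illustration only.

```python
# Toy version of the frame clustering / representative-frame steps.
import numpy as np
from sklearn.cluster import KMeans

def cluster_frames(joint_positions, n_clusters=8):
    """joint_positions: (n_frames, n_joints, 2) array of 2D joint coordinates."""
    flat = joint_positions.reshape(len(joint_positions), -1)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(flat)
    representatives = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(flat[members] - km.cluster_centers_[c], axis=1)
        representatives.append(int(members[dists.argmin()]))   # frame nearest the center
    return km.labels_, representatives
```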
-
Publication Number: US10896161B2
Publication Date: 2021-01-19
Application Number: US15908079
Filing Date: 2018-02-28
Applicant: Adobe Inc.
Inventor: Lubomira A. Dontcheva , Wilmot Li , Morgan Dixon , Jasper O'Leary , Holger Winnemoeller
IPC: G06F3/0484 , G06F3/0482 , G06F40/169 , G06F16/21
Abstract: Techniques of managing design iterations include generating data linking selected snapshot histories with contextual notes within a single presentation environment. A designer may generate a design iteration in a design environment. Once the design iteration is complete, the designer may show a snapshot of the design iteration to a stakeholder. The stakeholder then may provide written contextual notes within the design environment. The computer links the contextual notes to the snapshot and stores the snapshot and contextual notes in a database. When the designer generates a new design iteration from the previous design iteration and the contextual notes, the computer generates a new snapshot and a link to the previous snapshot to form a timeline of snapshots. The designer may then present the snapshots, the timeline, and the contextual notes to the stakeholder as a coherent history of how the design of the mobile app evolved to its present state.
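A minimal data-model sketch of the linking the abstract describes, snapshots carrying contextual notes and a back-link to the previous snapshot so a timeline can be presented. The class and field names are hypothetical, not the patented implementation.

```python
# Toy snapshot/notes/timeline data model.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Snapshot:
    design_state: bytes                      # serialized design iteration
    notes: List[str] = field(default_factory=list)   # stakeholder's contextual notes
    previous: Optional["Snapshot"] = None    # link to the prior iteration's snapshot

def timeline(latest: Snapshot) -> List[Snapshot]:
    """Walk the back-links to present the snapshot history in chronological order."""
    history = []
    node = latest
    while node is not None:
        history.append(node)
        node = node.previous
    return list(reversed(history))
```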
-
Publication Number: US10699705B2
Publication Date: 2020-06-30
Application Number: US16016418
Filing Date: 2018-06-22
Applicant: Adobe Inc.
Inventor: Wilmot Li , Jovan Popovic , Deepali Aneja , David Simons
IPC: G10L15/197 , G06N3/04 , G06N3/08 , G10L15/02 , G10L15/06 , G10L21/0316 , G10L25/21 , G10L25/24
Abstract: Disclosed systems and methods predict visemes from an audio sequence. A viseme-generation application accesses a first set of training data that includes a first audio sequence representing a sentence spoken by a first speaker and a sequence of visemes. Each viseme is mapped to a respective audio sample of the first audio sequence. The viseme-generation application creates a second set of training data by adjusting a second audio sequence, spoken by a second speaker speaking the same sentence, such that the second and first sequences have the same length and at least one phoneme occurs at the same time stamp in both sequences. The viseme-generation application maps the sequence of visemes to the second audio sequence and trains a viseme prediction model to predict a sequence of visemes from an audio sequence.
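The alignment step, stretching the second speaker's audio so that the first speaker's viseme labels can be reused, might look roughly like the sketch below. A uniform linear resample stands in for the phoneme-level alignment the abstract implies, and the feature layout is an assumption.

```python
# Toy alignment of a second speaker's feature sequence to the first speaker's length.
import numpy as np

def align_to_reference(features_b, n_frames_a):
    """Linearly resample features_b of shape (n_frames_b, n_dims) to n_frames_a frames."""
    n_b, n_dims = features_b.shape
    src = np.linspace(0.0, n_b - 1, n_frames_a)
    idx = np.arange(n_b)
    return np.stack([np.interp(src, idx, features_b[:, d]) for d in range(n_dims)], axis=1)

def second_training_set(features_a, visemes_a, features_b):
    """Reuse speaker one's frame-aligned viseme labels as targets for speaker two."""
    aligned_b = align_to_reference(features_b, len(features_a))
    return aligned_b, visemes_a
```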
-
Publication Number: US20200035010A1
Publication Date: 2020-01-30
Application Number: US16047839
Filing Date: 2018-07-27
Applicant: Adobe Inc. , Czech Technical University in Prague
Inventor: Vladimir Kim , Wilmot Li , Marek Dvoroznák , Daniel Sýkora
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use style-aware puppets patterned after a source-character-animation sequence to generate a target-character-animation sequence. In particular, the disclosed systems can generate style-aware puppets based on an animation character drawn or otherwise created (e.g., by an artist) for the source-character-animation sequence. The style-aware puppets can include, for instance, a character-deformational model, a skeletal-difference map, and a visual-texture representation of an animation character from a source-character-animation sequence. By using style-aware puppets, the disclosed systems can both preserve and transfer a detailed visual appearance and stylized motion of an animation character from a source-character-animation sequence to a target-character-animation sequence.
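The abstract names three components of a style-aware puppet: a character-deformational model, a skeletal-difference map, and a visual-texture representation. The container below is a hypothetical sketch of how those pieces might be grouped, with a toy transfer stub; neither the field layout nor the transfer logic comes from the disclosure.

```python
# Illustrative container for the three puppet components plus a toy transfer step.
from dataclasses import dataclass
import numpy as np

@dataclass
class StyleAwarePuppet:
    deformation_model: np.ndarray    # e.g. per-vertex deformation parameters
    skeletal_difference: np.ndarray  # offsets between drawn skeleton and a reference skeleton
    texture: np.ndarray              # visual-texture representation, e.g. (H, W, 3) image

def transfer(puppet: StyleAwarePuppet, target_skeleton: np.ndarray) -> np.ndarray:
    """Apply the stored skeletal difference to a target pose (toy stand-in for retargeting)."""
    return target_skeleton + puppet.skeletal_difference
```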
-
Publication Number: US20190266265A1
Publication Date: 2019-08-29
Application Number: US15908079
Filing Date: 2018-02-28
Applicant: Adobe Inc.
Inventor: Lubomira A. Dontcheva , Wilmot Li , Morgan Dixon , Jasper O'Leary , Holger Winnemoeller
IPC: G06F17/30 , G06F17/24 , G06F3/0482 , G06F3/0484
Abstract: Techniques of managing design iterations include generating data linking selected snapshot histories with contextual notes within a single presentation environment. A designer may generate a design iteration in a design environment. Once the design iteration is complete, the designer may show a snapshot of the design iteration to a stakeholder. The stakeholder then may provide written contextual notes within the design environment. The computer links the contextual notes to the snapshot and stores the snapshot and contextual notes in a database. When the designer generates a new design iteration from the previous design iteration and the contextual notes, the computer generates a new snapshot and a link to the previous snapshot to form a timeline of snapshots. The designer may then present the snapshots, the timeline, and the contextual notes to the stakeholder as a coherent history of how the design of the mobile app evolved to its present state.
-