-
Publication No.: US20190340419A1
Publication Date: 2019-11-07
Application No.: US15970831
Filing Date: 2018-05-03
Applicant: Adobe Inc.
Inventor: Rebecca Ilene Milman, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David P. Simons
Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
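The core idea in the abstract above, an avatar represented as data indexing cartoon features from a style library rather than as pixels, can be sketched as follows. This is an illustrative toy, not the patented implementation; the library contents, feature names, and index layout are all assumptions.

```python
# Hypothetical cartoon-feature library: each facial attribute has a small set
# of style-specific variants. A trained model would predict one index per
# attribute; here the indices are supplied directly for illustration.
CARTOON_LIBRARY = {
    "eyes":  ["round", "narrow", "wide"],
    "nose":  ["button", "long", "flat"],
    "mouth": ["smile", "neutral", "open"],
}

def parameterize(predicted_indices):
    """Pack model-predicted library indices into a parameterized avatar
    (the 'feature vector' the abstract describes)."""
    return {part: idx for part, idx in zip(CARTOON_LIBRARY, predicted_indices)}

def render(avatar):
    """Resolve the parameterization against the library, naming the cartoon
    feature chosen for each attribute."""
    return {part: CARTOON_LIBRARY[part][idx] for part, idx in avatar.items()}

avatar = parameterize([0, 2, 1])
print(render(avatar))  # {'eyes': 'round', 'nose': 'flat', 'mouth': 'neutral'}
```

Because the avatar is indices rather than an image, each attribute can be swapped or interpolated independently, which is what makes the representation animatable.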
-
Publication No.: US10825224B2
Publication Date: 2020-11-03
Application No.: US16196680
Filing Date: 2018-11-20
Applicant: Adobe Inc.
Inventor: Geoffrey Heller, Jakub Fiser, David P. Simons
Abstract: Certain embodiments involve automatically detecting video frames that depict visemes and that are usable for generating an animatable puppet. For example, a computing device accesses video frames depicting a person performing gestures usable for generating a layered puppet, including a viseme gesture corresponding to a target sound or phoneme. The computing device determines that audio data including the target sound or phoneme aligns with a particular video frame from the video frames that depicts the person performing the viseme gesture. The computing device creates, from the video frames, a puppet animation of the gestures, including an animation of the viseme corresponding to the target sound or phoneme that is generated from the particular video frame. The computing device outputs the puppet animation to a presentation device.
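The alignment step described above, matching a target sound or phoneme in the audio to the video frame that depicts the corresponding viseme gesture, reduces to mapping an audio timestamp onto the frame timeline. A minimal sketch, assuming a fixed frame rate and a phoneme onset time already obtained from some audio analysis:

```python
def frame_for_phoneme(phoneme_time_s: float, fps: float = 24.0) -> int:
    """Return the index of the video frame nearest to a phoneme's onset time.

    Assumes constant-rate video; phoneme_time_s would come from audio
    alignment of the recording against the target sound or phoneme.
    """
    return round(phoneme_time_s * fps)

# Suppose audio analysis placed the target phoneme at 1.25 s into the take:
frame_idx = frame_for_phoneme(1.25, fps=24.0)
print(frame_idx)  # 30 -- this frame supplies the viseme artwork for the puppet
```

The selected frame can then be cut out as the mouth layer for that viseme in the layered puppet.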
-
Publication No.: US20200160581A1
Publication Date: 2020-05-21
Application No.: US16196680
Filing Date: 2018-11-20
Applicant: Adobe Inc.
Inventor: Geoffrey Heller, Jakub Fiser, David P. Simons
Abstract: Certain embodiments involve automatically detecting video frames that depict visemes and that are usable for generating an animatable puppet. For example, a computing device accesses video frames depicting a person performing gestures usable for generating a layered puppet, including a viseme gesture corresponding to a target sound or phoneme. The computing device determines that audio data including the target sound or phoneme aligns with a particular video frame from the video frames that depicts the person performing the viseme gesture. The computing device creates, from the video frames, a puppet animation of the gestures, including an animation of the viseme corresponding to the target sound or phoneme that is generated from the particular video frame. The computing device outputs the puppet animation to a presentation device.
-
Publication No.: US10607065B2
Publication Date: 2020-03-31
Application No.: US15970831
Filing Date: 2018-05-03
Applicant: Adobe Inc.
Inventor: Rebecca Ilene Milman, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David P. Simons
Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
-
Publication No.: US10402481B2
Publication Date: 2019-09-03
Application No.: US15821254
Filing Date: 2017-11-22
Applicant: Adobe Inc.
Inventor: David P. Simons, James Acquavella, Gregory Scott Evans, Joel Brandt
IPC: G06F17/00 , G06F17/22 , G06F3/00 , G06F3/0482 , G06F3/16 , G06F17/24 , G06T13/80 , G06F16/13 , G06F16/185 , G06T13/00 , G06F16/21
Abstract: Systems and methods for switching to different states of electronic content being developed in a content creation application. This involves storing different states of the electronic content using a content-addressable data store, where individual states are represented by identifiers that identify items of respective states stored in the content-addressable data store. Identical items that are included in multiple states are stored once in the content-addressable data store and referenced by common identifiers. Input is received to change the electronic content to a selected state of the different states and the electronic content is displayed in the selected state based on identifiers for the selected state. In this way, undo, redo, and other commands to switch to different states of electronic content being developed are provided.
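The content-addressable scheme in the abstract above can be sketched directly: each item is stored once under a hash of its content, a document state is just a list of item identifiers, and undo/redo becomes switching between identifier lists. All class and method names here are illustrative assumptions, not the patented API.

```python
import hashlib

class ContentStore:
    """Toy content-addressable store for document states, as the abstract
    describes: identical items are stored once and shared by reference."""

    def __init__(self):
        self.items = {}   # content hash -> item bytes
        self.states = []  # history: each state is a tuple of item hashes

    def _put(self, item: bytes) -> str:
        key = hashlib.sha256(item).hexdigest()
        self.items.setdefault(key, item)  # an identical item is stored only once
        return key

    def commit(self, items) -> tuple:
        """Record a new state of the content; returns its identifiers."""
        state = tuple(self._put(i) for i in items)
        self.states.append(state)
        return state

    def restore(self, state_index: int):
        """Switch to any stored state (undo, redo, or arbitrary jump)
        by resolving that state's identifiers."""
        return [self.items[k] for k in self.states[state_index]]

store = ContentStore()
store.commit([b"layer-a", b"layer-b"])
store.commit([b"layer-a", b"layer-b-edited"])  # "layer-a" is shared, not copied
assert len(store.items) == 3                   # three distinct items total
assert store.restore(0) == [b"layer-a", b"layer-b"]
```

Because states share unchanged items by hash, storing many undo states costs little more than storing the items that actually differ, which is the same design choice that underlies Git's object store.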
-