-
Publication Number: US12014548B2
Publication Date: 2024-06-18
Application Number: US17805075
Filing Date: 2022-06-02
Applicant: ADOBE INC.
Inventor: Hijung Shin , Xue Bai , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Joy Oakyung Kim , Seth Walker
CPC classification number: G06V20/49 , G06F18/231 , G06V20/41 , G06V20/46 , G10L25/78 , G11B27/002 , G11B27/19 , G06V20/44
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies the smallest interaction unit of the video: semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.
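The pipeline this abstract describes can be illustrated with a short sketch. The following Python is a minimal illustration under assumed inputs, not Adobe's implementation: clip atoms are cut at the union of detected speech and scene boundaries, and each coarser level greedily merges short segments. All boundary times and level thresholds are placeholders.

```python
# Minimal sketch: clip atoms from merged boundaries, then greedy coarsening.

def make_clip_atoms(speech_bounds, scene_bounds, duration):
    """Cut the video at the union of all detected boundaries."""
    cuts = sorted(set(speech_bounds) | set(scene_bounds) | {0.0, duration})
    return [(a, b) for a, b in zip(cuts, cuts[1:])]

def cluster_level(segments, min_len):
    """Merge each too-short segment into its neighbor to form a coarser level."""
    merged = [segments[0]]
    for start, end in segments[1:]:
        if merged[-1][1] - merged[-1][0] < min_len:
            merged[-1] = (merged[-1][0], end)   # absorb into previous segment
        else:
            merged.append((start, end))
    return merged

def build_hierarchy(speech_bounds, scene_bounds, duration, level_min_lens):
    """Return one complete, disjoint segmentation per granularity level."""
    levels = [make_clip_atoms(speech_bounds, scene_bounds, duration)]
    for min_len in level_min_lens:
        levels.append(cluster_level(levels[-1], min_len))
    return levels

if __name__ == "__main__":
    hierarchy = build_hierarchy(
        speech_bounds=[2.1, 5.4, 9.0], scene_bounds=[5.0, 12.3],
        duration=15.0, level_min_lens=[3.0, 6.0])
    for i, level in enumerate(hierarchy):
        print(f"level {i}: {level}")
```

Every level covers the full video range with non-overlapping segments, matching the complete-and-disjoint property the abstract calls out.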
-
Publication Number: US11776188B2
Publication Date: 2023-10-03
Application Number: US17887685
Filing Date: 2022-08-15
Applicant: Adobe Inc.
Inventor: Dingzeyu Li , Yang Zhou , Jose Ignacio Echevarria Vallespi , Elya Shechtman
CPC classification number: G06T13/205 , G06T13/40 , G06T17/20
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for generating an animation of a talking head from an input audio signal of speech and a representation (such as a static image) of a head to animate. Generally, a neural network can learn to predict a set of 3D facial landmarks that can be used to drive the animation. In some embodiments, the neural network can learn to detect different speaking styles in the input speech and account for the different speaking styles when predicting the 3D facial landmarks. Generally, template 3D facial landmarks can be identified or extracted from the input image or other representation of the head, and the template 3D facial landmarks can be used with successive windows of audio from the input speech to predict 3D facial landmarks and generate a corresponding animation with plausible 3D effects.
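As a rough illustration of the windowed inference loop the abstract describes, the sketch below assumes a hypothetical predictor (`predict_landmark_offsets`, a stand-in for the patent's trained network) and a 68-point 3D landmark layout; the audio, rates, and mouth heuristic are all placeholders.

```python
# Hedged sketch of per-frame landmark prediction from sliding audio windows.
import numpy as np

SR = 16000               # assumed audio sample rate
WIN = int(0.5 * SR)      # 0.5 s analysis window
HOP = int(1 / 30 * SR)   # one hop per 30 fps video frame

def predict_landmark_offsets(audio_window, template):
    """Stand-in for a learned predictor: returns per-landmark 3D offsets."""
    energy = float(np.sqrt(np.mean(audio_window ** 2)))
    offsets = np.zeros_like(template)
    offsets[48:68, 1] = -energy * 0.01   # crude mouth opening from loudness
    return offsets

def animate(audio, template_landmarks):
    """Slide a window over the speech and predict one landmark set per frame."""
    frames = []
    for start in range(0, len(audio) - WIN, HOP):
        window = audio[start:start + WIN]
        frames.append(template_landmarks +
                      predict_landmark_offsets(window, template_landmarks))
    return np.stack(frames)   # (num_frames, 68, 3) landmark trajectory

template = np.random.rand(68, 3)                      # would come from the input image
speech = np.random.randn(SR * 3).astype(np.float32)   # 3 s of placeholder audio
print(animate(speech, template).shape)
```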
-
Publication Number: US11630562B2
Publication Date: 2023-04-18
Application Number: US17017366
Filing Date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Seth Walker , Joy Oakyung Kim , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Hijung Shin , Xue Bai
IPC: G06F3/04847 , G06F3/04845 , G06F3/0485
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
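The boundary-snapping interaction can be sketched in a few lines. The Python below is a minimal illustration with invented level data, not the patent's data model: a dragged time range expands outward to the enclosing segment boundaries of the active level.

```python
# Minimal sketch: snap a drag selection to a hierarchy level's boundaries.
import bisect

def snap_selection(boundaries, drag_start, drag_end):
    """Snap a dragged time range outward to the enclosing segment boundaries."""
    lo = boundaries[bisect.bisect_right(boundaries, drag_start) - 1]
    hi = boundaries[bisect.bisect_left(boundaries, drag_end)]
    return lo, hi

# Boundaries of two hierarchy levels (seconds); finer levels have more cuts.
level_2 = [0.0, 4.0, 9.5, 15.0]
level_1 = [0.0, 2.0, 4.0, 6.5, 9.5, 12.0, 15.0]

sel = snap_selection(level_2, 4.8, 10.2)   # drag snaps to (4.0, 15.0)
finer = snap_selection(level_1, *sel)      # re-snap after switching levels
print(sel, finer)
```

Switching levels re-snaps the same time range against the new boundary set, which is how a selection transforms into coarser or finer segments.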
-
Publication Number: US20210136510A1
Publication Date: 2021-05-06
Application Number: US16674924
Filing Date: 2019-11-05
Applicant: Adobe Inc.
Inventor: Zhenyu Tang , Timothy Langlois , Nicholas Bryan , Dingzeyu Li
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for rendering scene-aware audio based on acoustic properties of a user environment. For example, the disclosed system can use neural networks to analyze an audio recording to predict environment equalizations and reverberation decay times of the user environment without using a captured impulse response of the user environment. Additionally, the disclosed system can use the predicted reverberation decay times with an audio simulation of the user environment to optimize material parameters for the user environment. The disclosed system can then generate an audio sample that includes scene-aware acoustic properties based on the predicted environment equalizations, material parameters, and an environment geometry of the user environment. Furthermore, the disclosed system can augment training data for training the neural networks using frequency-dependent equalization information associated with measured and synthetic impulse responses.
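To make the rendering stage concrete, the following sketch synthesizes a plausible impulse response from per-band reverberation decay times (T60), which in the described system a neural network would predict, and convolves dry audio with it. Band edges, T60 values, and mix levels are invented placeholders, and this is not the patent's simulator.

```python
# Hedged sketch: decaying band-limited noise as a stand-in impulse response.
import numpy as np

SR = 16000

def synth_impulse_response(band_edges_hz, t60s, length_s=1.0):
    """Sum band-limited noise bursts, each decaying at its band's T60 rate."""
    n = int(length_s * SR)
    t = np.arange(n) / SR
    freqs = np.fft.rfftfreq(n, 1 / SR)
    ir = np.zeros(n)
    for (lo, hi), t60 in zip(zip(band_edges_hz, band_edges_hz[1:]), t60s):
        spectrum = np.fft.rfft(np.random.randn(n))
        spectrum[(freqs < lo) | (freqs >= hi)] = 0.0  # keep only this band
        band = np.fft.irfft(spectrum, n)
        ir += band * np.exp(-6.91 * t / t60)          # -60 dB after t60 seconds
    return ir / np.max(np.abs(ir))

def render(dry, ir, wet=0.3):
    """Mix the dry signal with its reverberant version."""
    tail = np.convolve(dry, ir)[: len(dry)]
    return (1 - wet) * dry + wet * tail

edges = [125, 500, 2000, 8000]    # placeholder band edges (Hz)
t60 = [0.8, 0.6, 0.4]             # placeholder predicted decay times (s)
dry = np.random.randn(SR * 2)
print(render(dry, synth_impulse_response(edges, t60)).shape)
```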
-
Publication Number: US11995894B2
Publication Date: 2024-05-28
Application Number: US17017353
Filing Date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Seth Walker , Joy Oakyung Kim , Hijung Shin , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Xue Bai
CPC classification number: G06V20/49 , G06T7/10 , G06V20/41 , G06T2207/10016
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a metadata panel with a composite list of video metadata. The composite list is segmented into selectable metadata segments at locations corresponding to boundaries of video segments defined by a hierarchical segmentation. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. One or more metadata segments can be selected in various ways, such as by clicking or tapping on a metadata segment or by performing a metadata search. When a metadata segment is selected, a corresponding video segment is emphasized on the video timeline, a playback cursor is moved to the first video frame of the video segment, and the first video frame is presented.
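The panel-to-timeline link the abstract describes can be sketched with an invented data model: each metadata segment stores the time range of its video segment, so selecting one yields both the timeline range to emphasize and the first frame to seek to. Class names and timings below are illustrative.

```python
# Minimal sketch: selecting a metadata segment drives the video timeline.
from dataclasses import dataclass

@dataclass
class MetadataSegment:
    text: str      # e.g. a transcript snippet for this video segment
    start: float   # video segment start time (seconds)
    end: float     # video segment end time (seconds)

def on_select(segment: MetadataSegment, fps: float = 30.0):
    """Return the timeline range to highlight and the first frame to display."""
    first_frame = int(segment.start * fps)
    return (segment.start, segment.end), first_frame

panel = [
    MetadataSegment("welcome to the demo", 0.0, 4.0),
    MetadataSegment("let's open the project", 4.0, 9.5),
]
highlight, frame = on_select(panel[1])
print(f"emphasize {highlight} on the timeline, seek to frame {frame}")
```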
-
Publication Number: US11812254B2
Publication Date: 2023-11-07
Application Number: US17515918
Filing Date: 2021-11-01
Applicant: Adobe Inc.
Inventor: Zhenyu Tang , Timothy Langlois , Nicholas Bryan , Dingzeyu Li
CPC classification number: H04S7/305 , G06N3/04 , G06N3/08 , H04S7/307 , H04S2400/11
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for rendering scene-aware audio based on acoustic properties of a user environment. For example, the disclosed system can use neural networks to analyze an audio recording to predict environment equalizations and reverberation decay times of the user environment without using a captured impulse response of the user environment. Additionally, the disclosed system can use the predicted reverberation decay times with an audio simulation of the user environment to optimize material parameters for the user environment. The disclosed system can then generate an audio sample that includes scene-aware acoustic properties based on the predicted environment equalizations, material parameters, and an environment geometry of the user environment. Furthermore, the disclosed system can augment training data for training the neural networks using frequency-dependent equalization information associated with measured and synthetic impulse responses.
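This grant shares its abstract with the publication above, so as a complementary sketch, the following illustrates only the material-optimization step: per-band absorption coefficients are adjusted until a toy Sabine-style room model matches the predicted decay times. The room constants, targets, and update rule are placeholders, not the patent's audio simulation.

```python
# Hedged sketch: fit absorption coefficients to predicted T60 targets.
import numpy as np

VOLUME, SURFACE = 60.0, 94.0   # placeholder room volume (m^3) and area (m^2)

def simulate_t60(absorption):
    """Sabine's formula: T60 = 0.161 * V / (S * a), per frequency band."""
    return 0.161 * VOLUME / (SURFACE * absorption)

def fit_materials(target_t60, steps=200, lr=0.01):
    """Simple multiplicative update driving simulated T60 toward the target."""
    a = np.full_like(target_t60, 0.2)   # initial absorption guess
    for _ in range(steps):
        err = simulate_t60(a) - target_t60
        a = np.clip(a + lr * np.sign(err) * a, 0.01, 0.99)
    return a

predicted_t60 = np.array([0.8, 0.6, 0.4])   # would come from the network
a = fit_materials(predicted_t60)
print(np.round(simulate_t60(a), 2), "vs target", predicted_t60)
```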
-
Publication Number: US20220392131A1
Publication Date: 2022-12-08
Application Number: US17887685
Filing Date: 2022-08-15
Applicant: Adobe Inc.
Inventor: Dingzeyu Li , Yang Zhou , Jose Ignacio Echevarria Vallespi , Elya Shechtman
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for generating an animation of a talking head from an input audio signal of speech and a representation (such as a static image) of a head to animate. Generally, a neural network can learn to predict a set of 3D facial landmarks that can be used to drive the animation. In some embodiments, the neural network can learn to detect different speaking styles in the input speech and account for the different speaking styles when predicting the 3D facial landmarks. Generally, template 3D facial landmarks can be identified or extracted from the input image or other representation of the head, and the template 3D facial landmarks can be used with successive windows of audio from the input speech to predict 3D facial landmarks and generate a corresponding animation with plausible 3D effects.
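This pre-grant publication shares its abstract with the grant above, so as a complementary sketch, the following isolates the template step: landmarks predicted in a canonical space are rescaled and recentered onto template landmarks taken from the input image. All arrays are placeholders, and the alignment is a simple centroid-and-scale heuristic rather than the patent's method.

```python
# Hedged sketch: retarget canonical landmarks onto a template head.
import numpy as np

def retarget(predicted, template):
    """Map canonical-space landmarks onto the template's position and scale."""
    def stats(x):
        c = x.mean(axis=0)                        # centroid
        s = np.linalg.norm(x - c, axis=1).mean()  # mean spread as a scale proxy
        return c, s
    pc, ps = stats(predicted)
    tc, ts = stats(template)
    return (predicted - pc) * (ts / ps) + tc      # normalize, rescale, recenter

canonical = np.random.rand(68, 3)                 # a predicted landmark set
template = np.random.rand(68, 3) * 2.0 + 5.0      # landmarks from the input image
aligned = retarget(canonical, template)
print(np.allclose(aligned.mean(axis=0), template.mean(axis=0)))
```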
-
Publication Number: US11450112B2
Publication Date: 2022-09-20
Application Number: US17017344
Filing Date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Hijung Shin , Xue Bai , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Joy Oakyung Kim , Seth Walker
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies the smallest interaction unit of the video: semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.
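This grant shares its abstract with the first entry above, so as a complementary sketch, the following shows one simple way to detect candidate speech boundaries from audio: an energy-threshold pause detector, which is a generic heuristic and not necessarily the detector the patent uses. The threshold, frame size, and test signal are placeholders.

```python
# Hedged sketch: candidate speech boundaries at long pauses in the audio.
import numpy as np

SR = 16000
FRAME = int(0.02 * SR)   # 20 ms analysis frames

def speech_boundaries(audio, threshold=0.02, min_pause_frames=10):
    """Return times (s) where speech resumes after a long quiet run."""
    n_frames = len(audio) // FRAME
    frames = audio[: n_frames * FRAME].reshape(n_frames, FRAME)
    voiced = np.sqrt(np.mean(frames ** 2, axis=1)) > threshold
    bounds, quiet_run = [], 0
    for i, v in enumerate(voiced):
        if not v:
            quiet_run += 1
        else:
            if quiet_run >= min_pause_frames:   # a long pause just ended
                bounds.append(i * FRAME / SR)
            quiet_run = 0
    return bounds

# Placeholder signal: 1 s of "speech", 0.5 s of silence, 1 s of "speech".
audio = np.concatenate([np.random.randn(SR), np.zeros(SR // 2), np.random.randn(SR)])
print(speech_boundaries(0.1 * audio))   # reports the boundary near 1.5 s
```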
-
Publication Number: US20220075820A1
Publication Date: 2022-03-10
Application Number: US17017370
Filing Date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Seth Walker , Joy Oakyung Kim , Morgan Nicole Evans , Najika Skyler Halsema Yoo , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Hijung Shin , Xue Bai
IPC: G06F16/738 , G06T13/80 , G06F3/0482 , G06F3/0484 , G06F16/74 , G06F16/735 , G06F16/75
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation by performing a metadata search. Generally, various types of metadata can be extracted from a video, such as a transcript of audio, keywords from the transcript, content or action tags visually extracted from video frames, and log event tags extracted from an associated temporal log. The extracted metadata is segmented into metadata segments and associated with corresponding video segments defined by a hierarchical video segmentation. As such, a metadata search can be performed to identify matching metadata segments and corresponding matching video segments defined by a particular level of the hierarchical segmentation. Matching metadata segments are emphasized in a composite list of the extracted metadata, and matching video segments are emphasized on the video timeline. Navigating to a different level of the hierarchy transforms the search results into corresponding coarser or finer segments defined by the level.
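The search flow can be sketched with an invented data model: metadata segments carry time ranges, a query matches their text, and each hit is widened to the video segments of the active hierarchy level that it overlaps. Segment contents and level boundaries below are illustrative.

```python
# Minimal sketch: metadata search mapped onto one hierarchy level's segments.
segments = [  # (text, start_s, end_s) metadata segments; placeholder content
    ("welcome to the tutorial", 0.0, 3.0),
    ("open the color panel", 3.0, 7.0),
    ("adjust the color curves", 7.0, 11.0),
]
level_bounds = [0.0, 5.0, 11.0]   # boundaries of the active hierarchy level

def search(query):
    """Find metadata matches, then the matching video segments at this level."""
    hits = [(s, e) for text, s, e in segments if query in text]
    matched = []
    for s, e in hits:  # widen each hit to the level segments it overlaps
        for lo, hi in zip(level_bounds, level_bounds[1:]):
            if s < hi and e > lo and (lo, hi) not in matched:
                matched.append((lo, hi))
    return hits, matched

hits, video_segments = search("color")
print("metadata hits:", hits)             # emphasized in the composite list
print("video segments:", video_segments)  # emphasized on the timeline
```

Re-running the widening step against a different level's boundaries transforms the same hits into coarser or finer matching segments, mirroring the level-navigation behavior.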
-
Publication Number: US12206930B2
Publication Date: 2025-01-21
Application Number: US18154412
Filing Date: 2023-01-13
Applicant: Adobe Inc.
Inventor: Kim Pascal Pimmel , Stephen Joseph Diverdi , Jiaju Ma , Rubaiat Habib , Li-Yi Wei , Hijung Shin , Deepali Aneja , John G. Nelson , Wilmot Li , Dingzeyu Li , Lubomira Assenova Dontcheva , Joel Richard Brandt
IPC: H04N21/431 , G06F3/04812 , G06F3/0482 , H04N21/4402
Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media that provide mechanisms for multimedia effect addition and editing support for text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. In response, the method identifies a video effect corresponding to the particular type of text stylization or layout, applies the video effect to a video segment corresponding to the text segment, and applies the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
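The core text-to-video mapping the abstract describes can be sketched with an invented transcript structure: each word carries its time range, so a selected text span resolves to a video time range, and a chosen stylization is paired with a video effect. The words, timings, and style-to-effect table are placeholders.

```python
# Hedged sketch: a transcript text selection resolves to a video edit.
words = [  # (word, start_s, end_s), a transcript with word-level timings
    ("this", 0.0, 0.3), ("part", 0.3, 0.6), ("gets", 0.6, 0.9),
    ("a", 0.9, 1.0), ("zoom", 1.0, 1.4), ("effect", 1.4, 1.9),
]
EFFECT_FOR_STYLE = {"bold": "punch-in zoom", "italic": "slow pan"}  # illustrative

def apply_style(sel_start, sel_end, style):
    """Resolve a word-index selection to video time and pick the paired effect."""
    selected = words[sel_start:sel_end]
    t0, t1 = selected[0][1], selected[-1][2]
    return {"text": " ".join(w for w, _, _ in selected), "style": style,
            "video_range": (t0, t1), "video_effect": EFFECT_FOR_STYLE[style]}

edit = apply_style(3, 6, "bold")   # user bolds "a zoom effect" in the transcript
print(edit)  # the UI would style the text and apply the effect to (0.9, 1.9)
```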