-
Publication No.: US12014548B2
Publication Date: 2024-06-18
Application No.: US17805075
Filing Date: 2022-06-02
Applicant: ADOBE INC.
Inventor: Hijung Shin , Xue Bai , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Joy Oakyung Kim , Seth Walker
CPC classification number: G06V20/49 , G06F18/231 , G06V20/41 , G06V20/46 , G10L25/78 , G11B27/002 , G11B27/19 , G06V20/44
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.
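The clustering idea in this abstract can be illustrated with a minimal sketch: clip atoms are the intervals between detected speech/scene boundaries, and coarser levels merge adjacent atoms across the weakest cut. The boundary times, strength values, and merge rule below are illustrative assumptions, not the patent's actual procedure.

```python
# Sketch: build clip atoms from detected boundaries, then cluster them
# into a multi-level hierarchy by repeatedly merging the weakest cut.
# Boundary times and strengths are made-up illustrative values.

def clip_atoms(boundaries, duration):
    """Atoms are the intervals between consecutive detected boundaries."""
    cuts = sorted(set([0.0, duration] + boundaries))
    return list(zip(cuts, cuts[1:]))

def coarsen(segments, strength):
    """Merge the pair of adjacent segments whose shared cut is weakest."""
    weakest = min(range(len(segments) - 1),
                  key=lambda i: strength(segments[i][1]))
    merged = (segments[weakest][0], segments[weakest + 1][1])
    return segments[:weakest] + [merged] + segments[weakest + 2:]

def hierarchy(boundaries, duration, strength, num_levels):
    """Level 0 is the atoms; each higher level merges one weakest cut."""
    level = clip_atoms(boundaries, duration)
    out = [level]
    for _ in range(num_levels - 1):
        if len(level) > 1:
            level = coarsen(level, strength)
        out.append(level)
    return out

# Speech boundaries (from audio) and scene boundaries (from frames), merged,
# each with an assumed "cut strength" score:
bounds = [2.5, 4.0, 7.2, 9.0]
strengths = {2.5: 0.9, 4.0: 0.2, 7.2: 0.8, 9.0: 0.4}
levels = hierarchy(bounds, 10.0, strengths.get, 3)
# Each level is a complete, disjoint cover of [0, 10] at coarser granularity.
```

Note how every level remains a complete set of disjoint segments, matching the "static, pre-computed" property the abstract describes.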
-
Publication No.: US11630562B2
Publication Date: 2023-04-18
Application No.: US17017366
Filing Date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Seth Walker , Joy Oakyung Kim , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Hijung Shin , Xue Bai
IPC: G06F3/04847 , G06F3/04845 , G06F3/0485
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
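The snapping behavior described here can be sketched as expanding a raw drag range outward to the enclosing boundaries of the active hierarchy level. The boundary values and function shape are illustrative assumptions.

```python
# Sketch of snapping a dragged selection to segment boundaries at the
# active hierarchy level. Boundary data is illustrative, not from the patent.

import bisect

def snap_selection(drag_start, drag_end, boundaries):
    """Expand a raw drag range outward to the enclosing segment boundaries.

    boundaries: sorted segment boundary times for the active level,
    including the video's start and end."""
    lo, hi = sorted((drag_start, drag_end))
    # Snap the selection start down, and the selection end up, to the
    # nearest boundaries defined by this level.
    start = boundaries[bisect.bisect_right(boundaries, lo) - 1]
    end = boundaries[bisect.bisect_left(boundaries, hi)]
    return start, end

# Boundaries at one level of the hierarchy (seconds):
level_boundaries = [0.0, 2.5, 7.2, 10.0]
print(snap_selection(3.1, 6.0, level_boundaries))  # (2.5, 7.2)
```

Navigating to a finer level and re-snapping the same drag range against that level's (denser) boundary list yields the finer selection the abstract describes.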
-
Publication No.: US11995894B2
Publication Date: 2024-05-28
Application No.: US17017353
Filing Date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Seth Walker , Joy Oakyung Kim , Hijung Shin , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Xue Bai
CPC classification number: G06V20/49 , G06T7/10 , G06V20/41 , G06T2207/10016
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a metadata panel with a composite list of video metadata. The composite list is segmented into selectable metadata segments at locations corresponding to boundaries of video segments defined by a hierarchical segmentation. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. One or more metadata segments can be selected in various ways, such as by clicking or tapping on a metadata segment or by performing a metadata search. When a metadata segment is selected, a corresponding video segment is emphasized on the video timeline, a playback cursor is moved to the first video frame of the video segment, and the first video frame is presented.
-
Publication No.: US11810358B2
Publication Date: 2023-11-07
Application No.: US17330677
Filing Date: 2021-05-26
Applicant: ADOBE INC.
Inventor: Hijung Shin , Cristin Ailidh Fraser , Aseem Agarwala , Lubomira Dontcheva , Joel Richard Brandt , Jovan Popović
CPC classification number: G06V20/49 , G06F18/22 , G11B27/031 , G06V10/759
Abstract: Embodiments are directed to video segmentation based on a query. Initially, a first segmentation such as a default segmentation is displayed (e.g., as interactive tiles in a finder interface, as a video timeline in an editor interface), and the default segmentation is re-segmented in response to a user query. The query can take the form of a keyword and one or more selected facets in a category of detected features. Keywords are searched for detected transcript words, detected object or action tags, or detected audio event tags that match the keywords. Selected facets are searched for detected instances of the selected facets. Each video segment that matches the query is re-segmented by solving a shortest path problem through a graph that models different segmentation options.
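The query model described above (keywords matched against transcript words and tags, plus selected facets matched against detected instances) can be sketched as a simple predicate. The segment data shape and field names are assumptions for illustration.

```python
# Sketch of matching a video segment against a query of keywords and
# selected facets (data model and field names are assumptions).

def matches(segment, keywords, facets):
    """A segment matches if any keyword hits its transcript or tags, and
    every selected facet has a detected instance within the segment."""
    text = segment["transcript"].lower()
    tags = {t.lower() for t in segment["tags"]}
    kw_hit = any(k.lower() in text or k.lower() in tags for k in keywords)
    facet_hit = all(f in segment["facets"] for f in facets)
    return (not keywords or kw_hit) and facet_hit

segment = {"transcript": "We unbox the camera",
           "tags": ["camera", "table"],       # detected object/action tags
           "facets": {"person:alice"}}        # detected facet instances
print(matches(segment, ["camera"], ["person:alice"]))  # True
```

Segments passing this predicate would then be re-segmented via the shortest-path formulation the abstract mentions.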
-
Publication No.: US11450112B2
Publication Date: 2022-09-20
Application No.: US17017344
Filing Date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Hijung Shin , Xue Bai , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Joy Oakyung Kim , Seth Walker
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.
-
Publication No.: US20220075820A1
Publication Date: 2022-03-10
Application No.: US17017370
Filing Date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Seth Walker , Joy Oakyung Kim , Morgan Nicole Evans , Najika Skyler Halsema Yoo , Aseem Agarwala , Joel R. Brandt , Jovan Popovic , Lubomira Dontcheva , Dingzeyu Li , Hijung Shin , Xue Bai
IPC: G06F16/738 , G06T13/80 , G06F3/0482 , G06F3/0484 , G06F16/74 , G06F16/735 , G06F16/75
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation by performing a metadata search. Generally, various types of metadata can be extracted from a video, such as a transcript of audio, keywords from the transcript, content or action tags visually extracted from video frames, and log event tags extracted from an associated temporal log. The extracted metadata is segmented into metadata segments and associated with corresponding video segments defined by a hierarchical video segmentation. As such, a metadata search can be performed to identify matching metadata segments and corresponding matching video segments defined by a particular level of the hierarchical segmentation. Matching metadata segments are emphasized in a composite list of the extracted metadata, and matching video segments are emphasized on the video timeline. Navigating to a different level of the hierarchy transforms the search results into corresponding coarser or finer segments defined by the level.
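The mapping from a time-stamped metadata hit to its enclosing video segment at the active hierarchy level can be sketched with a binary search over segment start times. The level data is an illustrative assumption.

```python
# Sketch: map a metadata hit (a time-stamped transcript word or tag) to the
# video segment containing it at the active hierarchy level. Data shapes
# are assumptions for illustration.

import bisect

def containing_segment(hit_time, level_starts):
    """level_starts: sorted start times of the segments at the active level.
    Returns the index of the segment containing hit_time."""
    return bisect.bisect_right(level_starts, hit_time) - 1

coarse_starts = [0.0, 7.2]           # coarser level: 2 segments
fine_starts = [0.0, 2.5, 4.0, 7.2]   # finer level: 4 segments
hit = 3.1                            # e.g. a matching transcript word at 3.1s
print(containing_segment(hit, coarse_starts))  # 0
print(containing_segment(hit, fine_starts))    # 1
```

Re-running the same hit against a different level's start times is one way the search results could "transform" into coarser or finer segments, as the abstract describes.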
-
Publication No.: US12033669B2
Publication Date: 2024-07-09
Application No.: US17330702
Filing Date: 2021-05-26
Applicant: ADOBE INC.
Inventor: Seth Walker , Hijung Shin , Cristin Ailidh Fraser , Aseem Agarwala , Lubomira Dontcheva , Joel Richard Brandt , Jovan Popović , Joy Oakyung Kim , Justin Salamon , Jui-hsien Wang , Timothy Jeewun Ganter , Xue Bai , Dingzeyu Li
IPC: G11B27/00 , G06F3/0482 , G06F3/0486 , G11B27/02 , G11B27/036 , G11B27/10 , G11B27/031 , G11B27/36
CPC classification number: G11B27/036 , G06F3/0482 , G06F3/0486
Abstract: Embodiments are directed to a snap point segmentation that defines the locations of selection snap points for a selection of video segments. Candidate snap points are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate snap point separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation between consecutive snap points on a video timeline. The snap point segmentation is computed by solving a shortest path problem through a graph that models different snap point locations and separations. When a user clicks or taps on the video timeline and drags, a selection snaps to the snap points defined by the snap point segmentation. In some embodiments, the snap points are displayed during a drag operation and disappear when the drag operation is released.
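The selection of snap points under a minimum-separation constraint can be sketched as a shortest-path dynamic program over sorted candidates. The edge weight below (squared deviation from a target spacing plus a per-candidate cost) is a stand-in for the patent's actual cost model; all values are illustrative.

```python
# Sketch: choose snap points from candidate feature-range boundaries so that
# consecutive snap points are at least `min_gap` apart (the minimum-pixel-
# separation constraint), minimizing a total cost via a DP shortest path.

def choose_snap_points(candidates, min_gap, target):
    """candidates: list of (time, cost) including start and end anchors.
    Returns the min-cost chain from the first to the last candidate in
    which consecutive chosen times are >= min_gap apart."""
    pts = sorted(candidates)
    n = len(pts)
    INF = float("inf")
    best = [INF] * n   # best[i]: min total cost of a chain ending at pts[i]
    prev = [-1] * n
    best[0] = 0.0      # the start anchor contributes no cost
    for i in range(1, n):
        for j in range(i):
            gap = pts[i][0] - pts[j][0]
            if gap < min_gap or best[j] == INF:
                continue
            # Edge weight: deviation from a target spacing plus the
            # per-candidate placement cost (both illustrative).
            w = best[j] + (gap - target) ** 2 + pts[i][1]
            if w < best[i]:
                best[i], prev[i] = w, j
    # Recover the chain ending at the final candidate (the video's end).
    chain, i = [], n - 1
    while i != -1:
        chain.append(pts[i][0])
        i = prev[i]
    return chain[::-1]

cands = [(0.0, 0.0), (0.4, 0.5), (2.0, 0.1), (2.3, 0.9), (5.0, 0.0)]
print(choose_snap_points(cands, 1.0, 2.0))  # [0.0, 2.0, 5.0]
```

The candidate at 0.4s is dropped because it violates the minimum separation from the start anchor, mirroring how the abstract penalizes snap points closer than the minimum pixel separation.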
-
Publication No.: US11899917B2
Publication Date: 2024-02-13
Application No.: US17969536
Filing Date: 2022-10-19
Applicant: Adobe Inc.
Inventor: Seth Walker , Joy O Kim , Aseem Agarwala , Joel Richard Brandt , Jovan Popovic , Lubomira Dontcheva , Dingzeyu Li , Hijung Shin , Xue Bai
IPC: G06F3/04847 , G06F3/0485 , G06F3/04845
CPC classification number: G06F3/04847 , G06F3/0485 , G06F3/04845 , G06F2203/04806
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
-
Publication No.: US11887629B2
Publication Date: 2024-01-30
Application No.: US17330689
Filing Date: 2021-05-26
Applicant: ADOBE INC.
Inventor: Seth Walker , Hijung Shin , Cristin Ailidh Fraser , Aseem Agarwala , Lubomira Dontcheva , Joel Richard Brandt , Jovan Popović , Joy Oakyung Kim , Justin Salamon , Jui-hsien Wang , Timothy Jeewun Ganter , Xue Bai , Dingzeyu Li
IPC: G11B27/00 , G11B27/036 , G06F3/0486 , G06F3/0482 , G11B27/02
CPC classification number: G11B27/036 , G06F3/0482 , G06F3/0486
Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), some transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that visualize a category of detected features (e.g., a visualization of detected visual scenes, audio classifications, visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
-
Publication No.: US20220301179A1
Publication Date: 2022-09-22
Application No.: US17805907
Filing Date: 2022-06-08
Applicant: ADOBE INC.
Inventor: Hijung Shin , Cristin Ailidh Fraser , Aseem Agarwala , Lubomira Dontcheva , Joel Richard Brandt , Jovan Popovic
Abstract: Embodiments are directed to video segmentation based on detected video features. More specifically, a segmentation of a video is computed by determining candidate boundaries from detected feature boundaries from one or more feature tracks; modeling different segmentation options by constructing a graph with nodes that represent candidate boundaries, edges that represent candidate segments, and edge weights that represent cut costs; and computing the video segmentation by solving a shortest path problem to find the path through the edges (segmentation) that minimizes the sum of edge weights along the path (cut costs). A representation of the video segmentation is presented, for example, using interactive tiles or a video timeline that represent(s) the video segments in the segmentation.
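The graph formulation in this abstract (nodes are candidate boundaries, edges are candidate segments weighted by cut cost, and the segmentation is the min-cost path) maps directly onto a shortest-path dynamic program over a DAG of sorted boundary times. The cut-cost function below is a stand-in for illustration, not the patent's actual model.

```python
# Sketch of the graph model: each edge (i, j) over sorted candidate
# boundaries is a candidate segment with a cut cost; the chosen segmentation
# is the min-total-cost path from the first boundary to the last.

def segment_video(boundaries, cut_cost):
    """DP shortest path over the DAG of sorted candidate boundaries."""
    b = sorted(boundaries)
    n = len(b)
    INF = float("inf")
    best = [INF] * n   # best[j]: min cost of any segmentation ending at b[j]
    prev = [-1] * n
    best[0] = 0.0
    for j in range(1, n):
        for i in range(j):
            w = best[i] + cut_cost(b[i], b[j])
            if w < best[j]:
                best[j], prev[j] = w, i
    # Walk back from the last boundary to read off the chosen cuts.
    path, j = [], n - 1
    while j != -1:
        path.append(b[j])
        j = prev[j]
    cuts = path[::-1]
    return list(zip(cuts, cuts[1:]))

# Illustrative cost: prefer segments close to 3 seconds long.
cost = lambda s, e: ((e - s) - 3.0) ** 2
print(segment_video([0.0, 1.0, 2.8, 4.0, 6.1, 9.0], cost))
# [(0.0, 2.8), (2.8, 6.1), (6.1, 9.0)]
```

Because the boundaries are sorted, the graph is acyclic and a simple O(n²) dynamic program suffices; no general shortest-path algorithm is needed.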