-
Publication No.: US20200382755A1
Publication Date: 2020-12-03
Application No.: US16428201
Filing Date: 2019-05-31
Applicant: Adobe Inc.
Inventor: Stephen DiVerdi , Seth Walker , Oliver Wang , Cuong Nguyen
IPC: H04N13/111 , H04N13/383 , H04N13/282
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific-filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific-filter parameters to render a filtered version of the 360-degree video.
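The abstract describes interpolating filter parameters from spatial keyframes near the current view direction. A minimal sketch of that interpolation step, assuming keyframes keyed by yaw angle and a simple inverse-angular-distance blend (the function and parameter names are illustrative, not from the patent):

```python
def interpolate_filter_params(view_yaw, keyframes):
    """Blend filter parameters from spatial keyframes by angular proximity.

    keyframes: list of (yaw_degrees, params_dict) pairs. The weighting
    scheme here is an assumption; the patent only specifies that nearby
    keyframes' parameters are interpolated into view-specific values.
    """
    weights = []
    for yaw, params in keyframes:
        # Shortest angular distance on a 360-degree circle.
        d = abs((view_yaw - yaw + 180) % 360 - 180)
        if d < 1e-9:
            return dict(params)  # exactly on a keyframe
        weights.append((1.0 / d, params))
    total = sum(w for w, _ in weights)
    keys = weights[0][1].keys()
    return {k: sum(w * p[k] for w, p in weights) / total for k in keys}
```

Halfway between two keyframes this yields the midpoint of their parameter values, which matches the smooth adjustment of color grading the abstract describes.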
-
Publication No.: US20200241730A1
Publication Date: 2020-07-30
Application No.: US16847765
Filing Date: 2020-04-14
Applicant: Adobe Inc.
Inventor: Stephen DiVerdi , Seth Walker , Brian Williams
IPC: G06F3/0481 , G06T19/20 , G06F9/451 , G06F3/0346 , G06T19/00
Abstract: Techniques are described for modifying a virtual reality environment to include or remove contextual information describing a virtual object within the virtual reality environment. The virtual object includes a user interface object associated with a development user interface of the virtual reality environment. In some cases, the contextual information includes information describing functions of controls included on the user interface object. In some cases, the virtual reality environment is modified based on a distance between the location of the user interface object and a location of a viewpoint within the virtual reality environment. Additionally or alternatively, the virtual reality environment is modified based on an elapsed time of the location of the user interface object remaining in a location.
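The two modification criteria the abstract names (viewpoint distance and elapsed dwell time) reduce to a simple gate. A sketch under assumed threshold values, since the patent does not fix any:

```python
def should_show_context(distance, dwell_seconds,
                        max_distance=2.0, min_dwell=1.5):
    """Decide whether to display contextual information for a UI object.

    Thresholds are illustrative placeholders; the patent describes the
    criteria (proximity of the viewpoint, time the object remains in
    place) without prescribing specific values.
    """
    return distance <= max_distance and dwell_seconds >= min_dwell
```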
-
Publication No.: US11922695B2
Publication Date: 2024-03-05
Application No.: US17805076
Filing Date: 2022-06-02
Applicant: ADOBE INC.
Inventor: Hijung Shin , Xue Bai , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Joy Oakyung Kim , Seth Walker
CPC classification number: G06V20/49 , G06F18/231 , G06V20/41 , G06V20/46 , G10L25/78 , G11B27/002 , G11B27/19 , G06V20/44
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.
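The pipeline the abstract outlines, taking the union of detected speech and scene boundaries to form clip atoms and then clustering them into coarser levels, can be sketched as follows. The greedy merge here is a stand-in for the patent's clustering step, and all names are illustrative:

```python
def clip_atoms(duration, speech_bounds, scene_bounds):
    """Union of detected boundaries defines the finest-level segments."""
    cuts = sorted({0.0, duration} | set(speech_bounds) | set(scene_bounds))
    return list(zip(cuts, cuts[1:]))

def coarsen(segments, min_len):
    """Build one coarser hierarchy level: merge segments shorter than
    min_len into their successor. The result stays complete (covers the
    whole video) and disjoint, as the abstract requires of each level."""
    out = []
    for start, end in segments:
        if out and out[-1][1] - out[-1][0] < min_len:
            out[-1] = (out[-1][0], end)  # absorb the short segment
        else:
            out.append((start, end))
    return out
```

Applying `coarsen` repeatedly with increasing `min_len` yields a multi-level hierarchy over the same clip atoms.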
-
Publication No.: US11893794B2
Publication Date: 2024-02-06
Application No.: US17805080
Filing Date: 2022-06-02
Applicant: ADOBE INC.
Inventor: Hijung Shin , Xue Bai , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Joy Oakyung Kim , Seth Walker
CPC classification number: G06V20/49 , G06F18/231 , G06V20/41 , G06V20/46 , G10L25/78 , G11B27/002 , G11B27/19 , G06V20/44
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.
-
Publication No.: US11887371B2
Publication Date: 2024-01-30
Application No.: US17330718
Filing Date: 2021-05-26
Applicant: ADOBE INC.
Inventor: Seth Walker , Hijung Shin , Cristin Ailidh Fraser , Aseem Agarwala , Lubomira Dontcheva , Joel Richard Brandt , Jovan Popović , Joy Oakyung Kim , Justin Salamon , Jui-hsien Wang , Timothy Jeewun Ganter , Xue Bai , Dingzeyu Li
CPC classification number: G06V20/49 , G06V10/751 , G06V20/46 , G06V40/161 , G11B27/10
Abstract: Embodiments are directed to a thumbnail segmentation that defines the locations on a video timeline where thumbnails are displayed. Candidate thumbnail locations are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate thumbnail separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation (e.g., the width of a thumbnail) between consecutive thumbnail locations on a video timeline. The thumbnail segmentation is computed by solving a shortest path problem through a graph that models different thumbnail locations and separations. As such, a video timeline is displayed with thumbnails at locations on the timeline defined by the thumbnail segmentation, with each thumbnail depicting a portion of the video associated with the thumbnail location.
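The abstract frames thumbnail placement as a shortest path over a graph of candidate locations, with penalties for separations under a minimum. A sketch of that formulation, using an assumed edge cost (deviation from the minimum separation, plus a large penalty when below it) since the patent's exact objective is not given here:

```python
def thumbnail_locations(candidates, min_sep, penalty=100.0):
    """Pick thumbnail positions by shortest path over candidate boundaries.

    candidates: sorted times whose first and last entries are the
    timeline ends. Edge costs are illustrative, not the patented ones.
    """
    n = len(candidates)
    cost = [float("inf")] * n
    prev = [-1] * n
    cost[0] = 0.0
    for j in range(1, n):
        for i in range(j):
            gap = candidates[j] - candidates[i]
            # Prefer gaps near min_sep; heavily penalize gaps below it.
            edge = abs(gap - min_sep) + (penalty if gap < min_sep else 0.0)
            if cost[i] + edge < cost[j]:
                cost[j], prev[j] = cost[i] + edge, i
    path, j = [], n - 1
    while j != -1:          # walk back from the last candidate
        path.append(candidates[j])
        j = prev[j]
    return path[::-1]
```

Because the candidate graph is a DAG ordered by time, this simple dynamic program is already a shortest-path solve; no general graph search is needed.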
-
Publication No.: US11880408B2
Publication Date: 2024-01-23
Application No.: US17017370
Filing Date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Seth Walker , Joy Oakyung Kim , Morgan Nicole Evans , Najika Skyler Halsema Yoo , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Hijung Shin , Xue Bai
IPC: G06F16/75 , G06F16/738 , G06T13/80 , G06F3/0482 , G06F16/74 , G06F16/735 , G06F3/0484
CPC classification number: G06F16/738 , G06F3/0482 , G06F3/0484 , G06F16/735 , G06F16/743 , G06F16/75 , G06T13/80
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation by performing a metadata search. Generally, various types of metadata can be extracted from a video, such as a transcript of audio, keywords from the transcript, content or action tags visually extracted from video frames, and log event tags extracted from an associated temporal log. The extracted metadata is segmented into metadata segments and associated with corresponding video segments defined by a hierarchical video segmentation. As such, a metadata search can be performed to identify matching metadata segments and corresponding matching video segments defined by a particular level of the hierarchical segmentation. Matching metadata segments are emphasized in a composite list of the extracted metadata, and matching video segments are emphasized on the video timeline. Navigating to a different level of the hierarchy transforms the search results into corresponding coarser or finer segments defined by the level.
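The core mapping the abstract describes, from matching metadata segments to the video segments of one hierarchy level, amounts to a text match followed by an interval-overlap test. A sketch with illustrative names and tuple shapes (the patent does not specify this API):

```python
def search_metadata(query, metadata_segments, level_segments):
    """Map metadata matches onto the video segments of one hierarchy level.

    metadata_segments: (start, end, text) tuples of extracted metadata.
    level_segments: (start, end) tuples from one segmentation level.
    """
    q = query.lower()
    hits = [(s, e) for s, e, text in metadata_segments if q in text.lower()]
    # A level segment matches when it overlaps any matching metadata span.
    return [seg for seg in level_segments
            if any(s < seg[1] and e > seg[0] for s, e in hits)]
```

Re-running the same overlap test against a coarser or finer level's segments reproduces the level-switching behavior the abstract describes: the hits stay fixed while the matched video segments change granularity.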
-
Publication No.: US11875568B2
Publication Date: 2024-01-16
Application No.: US17805076
Filing Date: 2022-06-02
Applicant: ADOBE INC.
Inventor: Hijung Shin , Xue Bai , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Joy Oakyung Kim , Seth Walker
CPC classification number: G06V20/49 , G06F18/231 , G06V20/41 , G06V20/46 , G10L25/78 , G11B27/002 , G11B27/19 , G06V20/44
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.
-
Publication No.: US11822602B2
Publication Date: 2023-11-21
Application No.: US17017370
Filing Date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Seth Walker , Joy Oakyung Kim , Morgan Nicole Evans , Najika Skyler Halsema Yoo , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Hijung Shin , Xue Bai
IPC: G06F16/75 , G06F16/738 , G06T13/80 , G06F3/0482 , G06F16/74 , G06F16/735 , G06F3/0484
CPC classification number: G06F16/738 , G06F3/0482 , G06F3/0484 , G06F16/735 , G06F16/743 , G06F16/75 , G06T13/80
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation by performing a metadata search. Generally, various types of metadata can be extracted from a video, such as a transcript of audio, keywords from the transcript, content or action tags visually extracted from video frames, and log event tags extracted from an associated temporal log. The extracted metadata is segmented into metadata segments and associated with corresponding video segments defined by a hierarchical video segmentation. As such, a metadata search can be performed to identify matching metadata segments and corresponding matching video segments defined by a particular level of the hierarchical segmentation. Matching metadata segments are emphasized in a composite list of the extracted metadata, and matching video segments are emphasized on the video timeline. Navigating to a different level of the hierarchy transforms the search results into corresponding coarser or finer segments defined by the level.
-
Publication No.: US11631434B2
Publication Date: 2023-04-18
Application No.: US17017362
Filing Date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Seth Walker , Joy Oakyung Kim , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Hijung Shin , Xue Bai
IPC: G11B27/034 , G11B27/34 , G06F16/783 , G06F3/0485 , G06F16/75 , G06V20/40
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation. In some embodiments, the finest level of the hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms. Each level of the hierarchical segmentation clusters the clip atoms with a corresponding degree of granularity into a corresponding set of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline (e.g., clicks, drags), by performing a metadata search, or through selection of corresponding metadata segments from a metadata panel. Navigating to a different level of the hierarchy transforms the selection into corresponding coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
-
Publication No.: US20220076705A1
Publication Date: 2022-03-10
Application No.: US17017362
Filing Date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Seth Walker , Joy Oakyung Kim , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Hijung Shin , Xue Bai
IPC: G11B27/034 , G11B27/34 , G06F16/783 , G06F16/75 , G06F3/0485 , G06K9/00
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation. In some embodiments, the finest level of the hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms. Each level of the hierarchical segmentation clusters the clip atoms with a corresponding degree of granularity into a corresponding set of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline (e.g., clicks, drags), by performing a metadata search, or through selection of corresponding metadata segments from a metadata panel. Navigating to a different level of the hierarchy transforms the selection into corresponding coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
-