-
Publication No.: US12033669B2
Publication Date: 2024-07-09
Application No.: US17330702
Filing Date: 2021-05-26
Applicant: Adobe Inc.
Inventor: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
IPC: G11B27/00, G06F3/0482, G06F3/0486, G11B27/02, G11B27/036, G11B27/10, G11B27/031, G11B27/36
CPC classification number: G11B27/036, G06F3/0482, G06F3/0486
Abstract: Embodiments are directed to a snap point segmentation that defines the locations of selection snap points for a selection of video segments. Candidate snap points are determined from the boundaries of feature ranges indicating when instances of detected features are present in the video. In some embodiments, candidate snap points are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation between consecutive snap points on a video timeline. The snap point segmentation is computed by solving a shortest path problem through a graph that models different snap point locations and separations. When a user clicks or taps on the video timeline and drags, a selection snaps to the snap points defined by the snap point segmentation. In some embodiments, the snap points are displayed during a drag operation and disappear when the drag operation is released.
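The shortest-path formulation in this abstract lends itself to a compact dynamic-programming sketch. The following is a hedged illustration, not the patented method: the quadratic preference for a target separation, the penalty weight, and the function name are assumptions introduced for the example; only the graph over sorted candidate boundaries and the under-separation penalty come from the abstract.

```python
def segment_timeline(candidates, min_sep, target_sep):
    """Pick snap points from candidate boundaries by solving a shortest-path
    problem over a DAG whose nodes are the sorted candidates.

    Edge cost prefers gaps near target_sep and heavily penalizes gaps below
    min_sep (the duration equivalent of the minimum pixel separation between
    consecutive snap points on the timeline).
    """
    pts = sorted(set(candidates))
    if len(pts) < 2:
        return pts
    n = len(pts)
    INF = float("inf")
    dist = [INF] * n      # dist[i]: cheapest path cost from pts[0] to pts[i]
    prev = [None] * n
    dist[0] = 0.0
    for i in range(n):
        if dist[i] == INF:
            continue
        for j in range(i + 1, n):
            gap = pts[j] - pts[i]
            cost = (gap - target_sep) ** 2      # prefer roughly even spacing
            if gap < min_sep:
                cost += 1e9 * (min_sep - gap)   # under-separation penalty
            if dist[i] + cost < dist[j]:
                dist[j] = dist[i] + cost
                prev[j] = i
    # Walk predecessors back from the last candidate to recover the path.
    path, i = [], n - 1
    while i is not None:
        path.append(pts[i])
        i = prev[i]
    return path[::-1]

# Example: boundaries in seconds, kept at least 3 s apart.
# segment_timeline([0, 2.5, 3.1, 8, 12, 12.4, 15], min_sep=3, target_sep=5)
# -> [0, 3.1, 8, 15]
```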
-
Publication No.: US11887629B2
Publication Date: 2024-01-30
Application No.: US17330689
Filing Date: 2021-05-26
Applicant: Adobe Inc.
Inventor: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
IPC: G11B27/00, G11B27/036, G06F3/0486, G06F3/0482, G11B27/02
CPC classification number: G11B27/036, G06F3/0482, G06F3/0486
Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), a portion of the transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that each visualize a category of detected features (e.g., detected visual scenes, audio classifications, or visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
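One plausible way to organize the per-tile state this abstract enumerates is sketched below. Every field name and type is hypothetical; only the list of ingredients (thumbnail, transcript snippet, detected faces, per-category faceted timelines) comes from the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class FacetedTimeline:
    category: str                        # e.g. "visual-scene", "audio-class", "artifact"
    ranges: list[tuple[float, float]]    # (start_sec, end_sec) spans where the feature is present

@dataclass
class InteractiveTile:
    segment_start: float                 # boundaries of the represented video segment
    segment_end: float
    thumbnail: bytes                     # e.g. the segment's first frame, encoded as an image
    transcript_snippet: str              # transcript text from the start of the segment
    face_ids: list[str]                  # identifiers for faces detected in the segment
    facets: list[FacetedTimeline] = field(default_factory=list)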
-
Publication No.: US11501102B2
Publication Date: 2022-11-15
Application No.: US16691205
Filing Date: 2019-11-21
Applicant: Adobe Inc.
Inventor: Justin Salamon, Yu Wang, Nicholas J. Bryan
Abstract: Certain embodiments involve techniques for automatically identifying sounds in an audio recording that match a selected sound. An audio search and editing system receives the audio recording and preprocesses it into audio portions. The audio portions are provided as a query to a neural network that includes a trained embedding model used to analyze the audio portions in view of the selected sound and estimate feature vectors. The audio search and editing system compares the feature vectors for the audio portions against the feature vector for the selected sound and the feature vectors for negative samples to generate, for each audio portion, an audio score that numerically represents its similarity to the selected sound. The audio scores are then used to classify the audio portions into a first class of matching sounds and a second class of non-matching sounds.
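A hedged sketch of the scoring and classification step described above. The embedding model is assumed to already exist and to have produced the vectors; cosine similarity, a mean negative-sample centroid, and a fixed threshold are illustrative choices the abstract does not pin down.

```python
import numpy as np

def score_and_classify(portion_embs, query_emb, negative_embs, threshold=0.0):
    """Score each audio portion by how much closer its embedding is to the
    selected sound than to the negative samples, then split the portions
    into matching and non-matching classes by thresholding the score."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    neg_centroid = np.mean(np.asarray(negative_embs), axis=0)
    scores = [cosine(e, query_emb) - cosine(e, neg_centroid) for e in portion_embs]
    matching = [i for i, s in enumerate(scores) if s > threshold]
    non_matching = [i for i, s in enumerate(scores) if s <= threshold]
    return scores, matching, non_matching
```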
-
Publication No.: US11308329B2
Publication Date: 2022-04-19
Application No.: US16868805
Filing Date: 2020-05-07
Applicant: Adobe Inc.
Inventor: Justin Salamon, Bryan Russell, Karren Yang
Abstract: A computer system is trained to understand audio-visual spatial correspondence using audio-visual clips having multi-channel audio. The computer system includes an audio subnetwork, a video subnetwork, and a pretext subnetwork. The audio subnetwork receives the two channels of audio from the audio-visual clips, and the video subnetwork receives the video frames. In a subset of the audio-visual clips, the audio-visual spatial relationship is misaligned, causing the audio-visual spatial cues for the audio and video to be incorrect. The audio subnetwork outputs an audio feature vector for each audio-visual clip, and the video subnetwork outputs a video feature vector. The audio and video feature vectors for each clip are merged and provided to the pretext subnetwork, which is configured to classify the merged vector as either having a misaligned audio-visual spatial relationship or not. The subnetworks are trained based on the loss calculated from this classification.
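The merge-and-classify pretext task can be sketched as below, here in PyTorch. The concatenation of per-clip audio and video feature vectors and the binary aligned/misaligned head follow the abstract; the vector dimensions, the hidden size, and the training snippet at the end are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AlignmentPretextHead(nn.Module):
    """Classifies a merged audio+video feature vector as having an aligned
    or misaligned audio-visual spatial relationship (e.g., audio channels
    swapped relative to the video)."""

    def __init__(self, audio_dim=512, video_dim=512, hidden=256):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(audio_dim + video_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),          # 0 = aligned, 1 = misaligned
        )

    def forward(self, audio_vec, video_vec):
        merged = torch.cat([audio_vec, video_vec], dim=-1)
        return self.classifier(merged)

# Training sketch: labels mark the clips whose channels were deliberately
# swapped; the loss backpropagates into the audio and video subnetworks too.
# logits = head(audio_subnet(audio), video_subnet(frames))
# loss = nn.CrossEntropyLoss()(logits, misaligned_labels)
```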
-
Publication No.: US11887371B2
Publication Date: 2024-01-30
Application No.: US17330718
Filing Date: 2021-05-26
Applicant: Adobe Inc.
Inventor: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
CPC classification number: G06V20/49, G06V10/751, G06V20/46, G06V40/161, G11B27/10
Abstract: Embodiments are directed to a thumbnail segmentation that defines the locations on a video timeline where thumbnails are displayed. Candidate thumbnail locations are determined from the boundaries of feature ranges indicating when instances of detected features are present in the video. In some embodiments, candidate thumbnail locations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation (e.g., the width of a thumbnail) between consecutive thumbnail locations on a video timeline. The thumbnail segmentation is computed by solving a shortest path problem through a graph that models different thumbnail locations and separations. As such, a video timeline is displayed with thumbnails at locations on the timeline defined by the thumbnail segmentation, with each thumbnail depicting a portion of the video associated with the thumbnail location.
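The thumbnail variant uses the same shortest-path segmentation sketched under US12033669B2 above; the only extra step it highlights is converting the minimum pixel separation (e.g., one thumbnail width) into a minimum duration. A small sketch with assumed timeline dimensions:

```python
def min_gap_seconds(timeline_px, video_secs, thumb_px):
    """Convert the minimum pixel separation between thumbnails (here one
    thumbnail width) into a minimum duration between thumbnail locations."""
    return thumb_px * (video_secs / timeline_px)

# A 60-minute video on a 1200 px timeline with 80 px thumbnails:
# min_gap_seconds(1200, 3600, 80) -> 240.0 seconds, which would serve as
# min_sep in the segment_timeline() sketch above.
```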
-
Publication No.: US20220076026A1
Publication Date: 2022-03-10
Application No.: US17330718
Filing Date: 2021-05-26
Applicant: Adobe Inc.
Inventor: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
Abstract: Embodiments are directed to a thumbnail segmentation that defines the locations on a video timeline where thumbnails are displayed. Candidate thumbnail locations are determined from the boundaries of feature ranges indicating when instances of detected features are present in the video. In some embodiments, candidate thumbnail locations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation (e.g., the width of a thumbnail) between consecutive thumbnail locations on a video timeline. The thumbnail segmentation is computed by solving a shortest path problem through a graph that models different thumbnail locations and separations. As such, a video timeline is displayed with thumbnails at locations on the timeline defined by the thumbnail segmentation, with each thumbnail depicting a portion of the video associated with the thumbnail location.
-
Publication No.: US20210158086A1
Publication Date: 2021-05-27
Application No.: US16691205
Filing Date: 2019-11-21
Applicant: Adobe Inc.
Inventor: Justin Salamon, Yu Wang, Nicholas J. Bryan
Abstract: Certain embodiments involve techniques for automatically identifying sounds in an audio recording that match a selected sound. An audio search and editing system receives the audio recording and preprocesses it into audio portions. The audio portions are provided as a query to a neural network that includes a trained embedding model used to analyze the audio portions in view of the selected sound and estimate feature vectors. The audio search and editing system compares the feature vectors for the audio portions against the feature vector for the selected sound and the feature vectors for negative samples to generate, for each audio portion, an audio score that numerically represents its similarity to the selected sound. The audio scores are then used to classify the audio portions into a first class of matching sounds and a second class of non-matching sounds.
-
Publication No.: US20220076707A1
Publication Date: 2022-03-10
Application No.: US17330702
Filing Date: 2021-05-26
Applicant: Adobe Inc.
Inventor: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
IPC: G11B27/036, G06F3/0486, G06F3/0482
Abstract: Embodiments are directed to a snap point segmentation that defines the locations of selection snap points for a selection of video segments. Candidate snap points are determined from the boundaries of feature ranges indicating when instances of detected features are present in the video. In some embodiments, candidate snap points are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation between consecutive snap points on a video timeline. The snap point segmentation is computed by solving a shortest path problem through a graph that models different snap point locations and separations. When a user clicks or taps on the video timeline and drags, a selection snaps to the snap points defined by the snap point segmentation. In some embodiments, the snap points are displayed during a drag operation and disappear when the drag operation is released.
-
Publication No.: US20220076706A1
Publication Date: 2022-03-10
Application No.: US17330689
Filing Date: 2021-05-26
Applicant: Adobe Inc.
Inventor: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
IPC: G11B27/036, G06F3/0482, G06F3/0486
Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), a portion of the transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that each visualize a category of detected features (e.g., detected visual scenes, audio classifications, or visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
-
Publication No.: US20210350135A1
Publication Date: 2021-11-11
Application No.: US16868805
Filing Date: 2020-05-07
Applicant: Adobe Inc.
Inventor: Justin Salamon, Bryan Russell, Karren Yang
Abstract: A computer system is trained to understand audio-visual spatial correspondence using audio-visual clips having multi-channel audio. The computer system includes an audio subnetwork, a video subnetwork, and a pretext subnetwork. The audio subnetwork receives the two channels of audio from the audio-visual clips, and the video subnetwork receives the video frames. In a subset of the audio-visual clips, the audio-visual spatial relationship is misaligned, causing the audio-visual spatial cues for the audio and video to be incorrect. The audio subnetwork outputs an audio feature vector for each audio-visual clip, and the video subnetwork outputs a video feature vector. The audio and video feature vectors for each clip are merged and provided to the pretext subnetwork, which is configured to classify the merged vector as either having a misaligned audio-visual spatial relationship or not. The subnetworks are trained based on the loss calculated from this classification.