Abstract:
A method of generating a moving thumbnail is disclosed. The method includes sampling video frames of a video item. The method further includes determining frame-level quality scores for the sampled video frames. The method also includes determining multiple group-level quality scores for multiple groups of the sampled video frames using the frame-level quality scores of the sampled video frames. The method further includes selecting one of the groups of the sampled video frames based on the multiple group-level quality scores. The method includes creating a moving thumbnail using a subset of the video frames that have timestamps within a range from a start timestamp to an end timestamp of the selected group.
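A minimal Python sketch of the group-scoring idea described above follows. The sampling step is assumed to have already produced the frame list, and the quality function, group size, and frame representation are illustrative assumptions rather than details taken from the abstract.

```python
# Sketch only: Frame, the quality callable, and group_size are hypothetical
# stand-ins used to illustrate frame-level vs. group-level scoring.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Frame:
    timestamp: float
    pixels: object  # decoded image data (placeholder type)


def best_group(frames: List[Frame], scores: List[float], group_size: int = 30) -> List[Frame]:
    """Split the sampled frames into contiguous groups and return the group
    whose mean frame-level quality score (its group-level score) is highest."""
    group_size = min(group_size, len(frames))
    starts = range(0, len(frames) - group_size + 1, group_size)
    best = max(starts, key=lambda i: sum(scores[i:i + group_size]) / group_size)
    return frames[best:best + group_size]


def moving_thumbnail(all_frames: List[Frame], sampled: List[Frame],
                     quality: Callable[[Frame], float]) -> List[Frame]:
    scores = [quality(f) for f in sampled]           # frame-level quality scores
    group = best_group(sampled, scores)              # group with best group-level score
    start, end = group[0].timestamp, group[-1].timestamp
    # The thumbnail is built from every original frame whose timestamp lies
    # within the selected group's [start, end] range, not just the sampled frames.
    return [f for f in all_frames if start <= f.timestamp <= end]
```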
Abstract:
A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method selects an entity from a plurality of entities identifying characteristics of a video item, where the video item has associated metadata. The computer-implemented method receives probabilities of existence of the entity in video frames of the video item, and selects a video frame determined to comprise the entity responsive to determining that the video frame has a probability of existence of the entity greater than zero. The computer-implemented method determines a scaling factor for the probability of existence of the entity using the metadata of the video item, and determines an adjusted probability of existence of the entity by using the scaling factor to adjust the probability of existence of the entity. The computer-implemented method labels the video frame with the adjusted probability of existence.
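The sketch below illustrates how a metadata-derived scaling factor could adjust a frame-level probability, as described above. The metadata fields consulted and the particular scaling rule are assumptions made for illustration; the abstract does not specify them.

```python
# Hypothetical example: boost the entity probability when the entity also
# appears in the video's metadata text; the 1.5 factor is an assumption.
def scaling_factor(metadata: dict, entity: str) -> float:
    text = " ".join(str(v).lower() for v in metadata.values())
    return 1.5 if entity.lower() in text else 1.0


def adjusted_probability(p: float, metadata: dict, entity: str) -> float:
    return min(1.0, p * scaling_factor(metadata, entity))


# Label a frame only if the received probability of existence is greater than zero.
frame_labels = {}
p = 0.4
if p > 0:
    frame_labels["dog"] = adjusted_probability(p, {"title": "My dog at the park"}, "dog")
    # frame_labels["dog"] == 0.6 for this example
```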
Abstract:
A system and method of annotating an application, including obtaining input signals associated with a target application, wherein the input signals are obtained from a plurality of sources, obtaining first annotation data from the obtained input signals, generating second annotation data in a machine-understandable form based on the first annotation data, and associating the second annotation data with the target application.
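A rough Python sketch of the pipeline described above follows. The source names, the free-text "first annotation data", and the keyword-to-entity mapping used as the machine-understandable form are all illustrative assumptions.

```python
# Sketch under assumptions: sources maps a source name to a callable that
# returns text signals for the target application; the keyword-to-entity
# table is a toy stand-in for generating machine-understandable annotations.
def annotate_application(target_app: str, sources: dict) -> dict:
    # 1. Obtain input signals for the target application from a plurality of sources.
    signals = [sources[name](target_app) for name in sources]
    # 2. Obtain first annotation data (free-text descriptions) from the signals.
    first_annotations = [text for signal in signals for text in signal]
    # 3. Generate second annotation data in a machine-understandable form.
    keyword_to_entity = {"photo": "/entity/photography", "music": "/entity/music"}
    second_annotations = sorted({eid for text in first_annotations
                                 for kw, eid in keyword_to_entity.items()
                                 if kw in text.lower()})
    # 4. Associate the second annotation data with the target application.
    return {target_app: second_annotations}
```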
Abstract:
A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features, where the semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
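The following sketch illustrates the segment-and-score idea. The fixed-length chronological segmentation and the scoring rule (summing semantic-feature likelihoods) are stand-in assumptions, not the method's actual segmentation or scoring functions.

```python
# Illustrative only: each frame is represented by a dict mapping semantic
# concepts to likelihoods; segments are fixed-length chronological chunks.
from typing import Dict, List


def representative_frames(frame_features: List[Dict[str, float]],
                          segment_length: int = 100) -> List[int]:
    """Return one representative frame index per chronological segment,
    chosen as the frame with the highest semantic-feature score."""
    representatives = []
    for start in range(0, len(frame_features), segment_length):
        segment = range(start, min(start + segment_length, len(frame_features)))
        # Score each frame by how strongly its semantic concepts are present.
        scores = {i: sum(frame_features[i].values()) for i in segment}
        representatives.append(max(scores, key=scores.get))
    return representatives
```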
Abstract:
A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
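A sketch of how an entity-specific classifier and calibration function could be combined is shown below. The correlation-based feature selection, the logistic-regression classifier, and the isotonic calibration are assumed substitutes for the components named above, not the patented implementations.

```python
# Sketch under stated assumptions; requires numpy and scikit-learn.
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression


def train_entity_model(X, y, correlation_threshold=0.1):
    """X: (n_frames, n_features) frame features; y: 1 if the entity is present."""
    # Select the subset of features most correlated with the entity.
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    selected = np.where(corr >= correlation_threshold)[0]
    clf = LogisticRegression().fit(X[:, selected], y)
    # Calibrate raw classifier scores so they behave like probabilities.
    raw = clf.predict_proba(X[:, selected])[:, 1]
    calibration = IsotonicRegression(out_of_bounds="clip").fit(raw, y)
    return selected, clf, calibration


def probability_of_entity(frame_features, selected, clf, calibration):
    """Probability of existence for one frame's feature vector."""
    raw = clf.predict_proba(frame_features[selected].reshape(1, -1))[:, 1]
    return float(calibration.predict(raw)[0])
```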
Abstract:
Facilitation of content entity annotation while maintaining joint quality, coverage and/or completeness performance conditions is provided. In one example, a system includes an aggregation component that aggregates signals indicative of initial entities for content and initial scores associated with the initial entities generated by one or more content annotation sources; and a mapping component that maps the initial scores to calibrated scores within a defined range. The system also includes a linear aggregation component that: applies selected weights to the calibrated scores, wherein the selected weights are based on joint performance conditions; and combines the weighted, calibrated scores based on a selected linear aggregation model of a plurality of linear aggregation models to generate a final score. The system also includes an annotation component that determines whether to annotate the content with one of the initial entities based on a comparison of the final score and a defined threshold value.
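A small sketch of the aggregate-calibrate-combine-threshold flow described above follows. The min-max mapping into [0, 1], the example weights, and the threshold value are assumptions chosen for illustration.

```python
# Illustrative example of weighted linear aggregation of calibrated scores.
def calibrate(score: float, lo: float, hi: float) -> float:
    """Map a source's raw score into the defined [0, 1] range."""
    return min(1.0, max(0.0, (score - lo) / (hi - lo)))


def final_score(source_scores: dict, source_ranges: dict, weights: dict) -> float:
    calibrated = {name: calibrate(s, *source_ranges[name]) for name, s in source_scores.items()}
    # Apply the selected weights and combine the weighted, calibrated scores linearly.
    total_weight = sum(weights[name] for name in calibrated)
    return sum(weights[name] * c for name, c in calibrated.items()) / total_weight


def should_annotate(source_scores, source_ranges, weights, threshold=0.5) -> bool:
    # Annotate the content with the entity only if the final score clears the threshold.
    return final_score(source_scores, source_ranges, weights) >= threshold
```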