Predicting video start times for maximizing user engagement

    Publication number: US10390067B1

    Publication date: 2019-08-20

    Application number: US15593448

    Application date: 2017-05-12

    Applicant: Google Inc.

    Abstract: Implementations disclose predicting video start times for maximizing user engagement. A method includes receiving a first content item comprising content item segments, processing the first content item using a trained machine learning model that is trained based on interaction signals and audio-visual content features of a training set of training segments of training content items, and obtaining, based on the processing of the first content item using the trained machine learning model, one or more outputs comprising salience scores for the content item segments, the salience scores indicating which content item segment of the content item segments is to be selected as a starting point for playback of the first content item.
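
    To make the selection step concrete, here is a minimal Python sketch of how per-segment salience scores could drive the choice of a playback start point. The model.score_segments wrapper, the fixed segment length, and all names are illustrative assumptions, not details taken from the patent.

        # Minimal sketch: pick a playback start time from per-segment salience scores.
        # `model.score_segments` is a hypothetical wrapper around the trained model;
        # the fixed segment length is also an assumption made for illustration.
        from typing import List, Sequence

        SEGMENT_SECONDS = 5  # assumed fixed length of each content item segment

        def select_start_time(segment_features: Sequence[Sequence[float]], model) -> float:
            """Return the start offset (seconds) of the most salient segment."""
            scores: List[float] = model.score_segments(segment_features)  # one score per segment
            best_index = max(range(len(scores)), key=scores.__getitem__)
            return best_index * SEGMENT_SECONDS

    Playback would then be seeked to the returned offset rather than starting at 0:00.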

    Predicting video start times for maximizing user engagement

    Publication number: US09659218B1

    Publication date: 2017-05-23

    Application number: US14699243

    Application date: 2015-04-29

    Applicant: Google Inc.

    CPC classification number: G06K9/00744

    Abstract: Implementations disclose predicting video start times for maximizing user engagement. A method includes applying a machine-learned model to audio-visual content features of segments of a target content item, the machine-learned model trained based on user interaction signals and audio-visual content features of a training set of content item segments, calculating, based on applying the machine-learned model, a salience score for each of the segments of the target content item, and selecting, based on the calculated salience scores, one of the segments of the target content item as a starting point for playback of the target content item.
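
    The training set described above pairs user interaction signals with audio-visual features of content item segments. The sketch below assembles such examples, assuming (purely for illustration) that the interaction signal is the fraction of viewers still watching when each segment begins; the patent does not specify this particular signal, and all names are hypothetical.

        # Sketch: build (features, engagement) training pairs for a salience model.
        # Using "fraction of viewers still watching at segment i" as the interaction
        # signal is an assumption for illustration only.
        from dataclasses import dataclass
        from typing import List, Sequence

        @dataclass
        class SegmentExample:
            features: Sequence[float]   # audio-visual content features of one segment
            engagement: float           # interaction-signal target in [0, 1]

        def build_training_set(segment_features: List[Sequence[float]],
                               viewers_reaching: List[int]) -> List[SegmentExample]:
            """viewers_reaching[i] = number of viewers still watching when segment i starts."""
            total = max(viewers_reaching[0], 1)
            return [SegmentExample(f, reached / total)
                    for f, reached in zip(segment_features, viewers_reaching)]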

    Selecting and Presenting Representative Frames for Video Previews
    Invention application (granted)

    Publication number: US20160070962A1

    Publication date: 2016-03-10

    Application number: US14848216

    Application date: 2015-09-08

    Applicant: Google Inc.

    Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features, the semantic features identifying likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
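
    As a rough illustration of the per-segment selection step, the sketch below scores each frame from its semantic-concept likelihoods and keeps the highest-scoring frame in every segment. Treating a frame's score as the plain sum of its likelihoods is an assumption; the abstract only requires that scores be based at least on the semantic features.

        # Sketch: pick one representative frame per video segment from semantic features.
        # Scoring a frame as the sum of its semantic-concept likelihoods is an assumed
        # stand-in for whatever scoring function an implementation would actually use.
        from typing import Dict, List, Sequence

        def frame_score(semantic_likelihoods: Sequence[float]) -> float:
            return sum(semantic_likelihoods)

        def representative_frames(segments: List[List[Sequence[float]]]) -> Dict[int, int]:
            """Map segment index -> index, within that segment, of its best-scoring frame."""
            best: Dict[int, int] = {}
            for seg_idx, frames in enumerate(segments):
                scores = [frame_score(f) for f in frames]
                best[seg_idx] = max(range(len(scores)), key=scores.__getitem__)
            return best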

    Feature-based Video Annotation
    Invention application (granted)

    Publication number: US20170046573A1

    Publication date: 2017-02-16

    Application number: US14823946

    Application date: 2015-08-11

    Applicant: Google Inc.

    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities that identify characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, the video frame having associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
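
    A rough sketch of the inference path described above follows, with a logistic classifier and a clipped linear mapping standing in for the per-entity classifier and the aggregation calibration function. Both forms, and all names, are assumptions; the patent does not prescribe specific models.

        # Sketch: estimate the probability that an entity appears in a frame.
        # The logistic classifier and the clipped linear calibration below are
        # placeholders, not forms prescribed by the patent.
        import math
        from typing import Sequence

        def classifier_score(weights: Sequence[float], features: Sequence[float]) -> float:
            """Raw classifier output for the entity given the frame's selected features."""
            z = sum(w * x for w, x in zip(weights, features))
            return 1.0 / (1.0 + math.exp(-z))

        def calibrate(raw_score: float, slope: float = 1.0, offset: float = 0.0) -> float:
            """Placeholder aggregation calibration function mapping raw scores into [0, 1]."""
            return min(1.0, max(0.0, slope * raw_score + offset))

        def entity_probability(weights: Sequence[float], features: Sequence[float]) -> float:
            return calibrate(classifier_score(weights, features))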

    Facilitating content entity annotation while satisfying joint performance conditions

    Publication number: US09830361B1

    Publication date: 2017-11-28

    Application number: US14096950

    Application date: 2013-12-04

    Applicant: Google Inc.

    CPC classification number: G06F17/3053 G06F17/241 G06F17/278 G06F17/30598

    Abstract: Facilitation of content entity annotation while maintaining joint quality, coverage and/or completeness performance conditions is provided. In one example, a system includes an aggregation component that aggregates signals indicative of initial entities for content and initial scores associated with the initial entities generated by one or more content annotation sources; and a mapping component that maps the initial scores to calibrated scores within a defined range. The system also includes a linear aggregation component that: applies selected weights to the calibrated scores, wherein the selected weights are based on joint performance conditions; and combines the weighted, calibrated scores based on a selected linear aggregation model of a plurality of linear aggregation models to generate a final score. The system also includes an annotation component that determines whether to annotate the content with one of the initial entities based on a comparison of the final score and a defined threshold value.
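
    The sketch below illustrates the aggregation pipeline in miniature: initial per-source scores are mapped into a defined range, combined by a weighted linear aggregation, and compared against a threshold. The min-max calibration, the example weights, and the 0.5 threshold are assumptions; in the patent these would be chosen to satisfy the joint quality, coverage, and completeness conditions.

        # Sketch: combine per-source annotation scores into one final score and decide
        # whether to annotate. Min-max calibration and a simple weighted sum are
        # assumptions standing in for the calibration and linear aggregation models.
        from typing import Sequence, Tuple

        def calibrate(score: float, lo: float, hi: float) -> float:
            """Map one source's initial score into the defined range [0, 1]."""
            return max(0.0, min(1.0, (score - lo) / ((hi - lo) or 1.0)))

        def final_score(initial_scores: Sequence[float],
                        ranges: Sequence[Tuple[float, float]],
                        weights: Sequence[float]) -> float:
            """Weighted linear aggregation of the calibrated per-source scores."""
            calibrated = [calibrate(s, lo, hi) for s, (lo, hi) in zip(initial_scores, ranges)]
            return sum(w * c for w, c in zip(weights, calibrated))

        def should_annotate(initial_scores, ranges, weights, threshold: float = 0.5) -> bool:
            """Annotate the content with the candidate entity only above the threshold."""
            return final_score(initial_scores, ranges, weights) >= threshold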
