Systems and Methods for Improved Video Understanding

    Publication No.: US20240428586A1

    Publication Date: 2024-12-26

    Application No.: US18827088

    Filing Date: 2024-09-06

    Applicant: Google LLC

    Abstract: A computer-implemented method for classifying video data with improved accuracy includes obtaining, by a computing system comprising one or more computing devices, video data comprising a plurality of video frames; extracting, by the computing system, a plurality of spatiotemporal representations from the video data, the plurality of spatiotemporal representations comprising a representation of spatiotemporal information in the video data; providing, by the computing system, the plurality of spatiotemporal representations as input to a video understanding model, the video understanding model comprising a video transformer encoder model; and receiving, by the computing system, a classification output from the video understanding model.
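The abstract above describes tokenizing video into spatiotemporal representations before a transformer encoder. As a rough illustration, one common way to build such representations is to cut the video into non-overlapping spatiotemporal patches ("tubelets"); the sketch below shows only that tokenization step, with toy pure-Python data. The function name and parameters are illustrative, not taken from the patent.

```python
# Illustrative sketch: splitting video frames into spatiotemporal tokens
# ("tubelets") of shape t x p x p, each flattened into one token.

def extract_tubelet_tokens(frames, t=2, p=2):
    """Split a video (list of T frames, each an HxW grid of values)
    into non-overlapping t x p x p patches, one flat token each."""
    T = len(frames)
    H = len(frames[0])
    W = len(frames[0][0])
    tokens = []
    for ti in range(0, T - t + 1, t):
        for hi in range(0, H - p + 1, p):
            for wi in range(0, W - p + 1, p):
                token = [frames[ti + dt][hi + dh][wi + dw]
                         for dt in range(t)
                         for dh in range(p)
                         for dw in range(p)]
                tokens.append(token)
    return tokens

# Toy video: 4 frames of 4x4 pixels, where every pixel of frame f has value f.
video = [[[f] * 4 for _ in range(4)] for f in range(4)]
tokens = extract_tubelet_tokens(video, t=2, p=2)
print(len(tokens), len(tokens[0]))  # 8 tokens, each of length 8
```

In a full pipeline these tokens (after a learned linear projection and position embeddings) would be the input sequence to the video transformer encoder the abstract mentions.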

    VIDEO LOCALIZATION USING ARTIFICIAL INTELLIGENCE

    Publication No.: US20240371164A1

    Publication Date: 2024-11-07

    Application No.: US18652703

    Filing Date: 2024-05-01

    Applicant: Google LLC

    Abstract: Methods and systems for video localization using artificial intelligence are provided herein. A set of video embeddings representing features of one or more video frames of a media item and a set of textual embeddings corresponding to an event associated with the media item are obtained. Fused video-textual data is generated based on the set of video embeddings and the set of textual embeddings. The fused video-textual data indicates features of the video frames of the media item and textual data pertaining to the media item. The fused video-textual data is provided as an input to an artificial intelligence (AI) model trained to perform multiple video localization tasks with respect to media items of a platform. One or more outputs of the AI model are obtained. A segment of the media item that depicts the event is determined based on the one or more outputs of the AI model.
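The core step the abstract describes, matching a textual event embedding against video-frame embeddings to find the segment that depicts the event, can be sketched very simply. The version below uses plain cosine similarity in place of the learned fusion and AI model; all names and the threshold are illustrative assumptions, not details from the patent.

```python
# Illustrative sketch: localize an event by scoring each frame embedding
# against the event's text embedding and returning the matching run.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def localize_event(frame_embeddings, text_embedding, threshold=0.9):
    """Return (start, end) frame indices of the contiguous run of frames
    whose similarity to the event text meets the threshold, else None."""
    scores = [cosine(f, text_embedding) for f in frame_embeddings]
    hits = [i for i, s in enumerate(scores) if s >= threshold]
    if not hits:
        return None
    return hits[0], hits[-1]

# Toy example: frames 2-4 point in the same direction as the event text.
frames = [[1, 0], [1, 0], [0, 1], [0, 1], [0, 1], [1, 0]]
event = [0, 1]
print(localize_event(frames, event))  # (2, 4)
```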

    Pre-Training a Model Using Unlabeled Videos
    Invention Publication

    Publication No.: US20240127794A1

    Publication Date: 2024-04-18

    Application No.: US17957291

    Filing Date: 2022-09-30

    Applicant: Google LLC

    CPC classification number: G10L15/063 G10L15/24 G10L15/26

    Abstract: Systems and methods for performing captioning for image or video data are described herein. The method can include receiving unlabeled multimedia data, and outputting, from a machine learning model, one or more captions for the multimedia data. Training the machine learning model to create these outputs can include inputting a subset of video frames and a first utterance into the machine learning model, using the machine learning model to predict an utterance based on the subset of video frames and the first utterance, and updating one or more parameters of the machine learning model based on a loss function that compares the predicted utterance with a second utterance.
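The training signal above hinges on a loss that compares a predicted utterance against a ground-truth second utterance. The abstract does not specify the loss, so the sketch below substitutes a simple token-level mismatch rate purely to illustrate the shape of the comparison; real systems would use something like token cross-entropy.

```python
# Illustrative stand-in for the (unspecified) utterance-comparison loss:
# the fraction of token positions where prediction and target disagree,
# measured over the longer of the two sequences.

def utterance_loss(predicted_tokens, target_tokens):
    n = max(len(predicted_tokens), len(target_tokens))
    mismatches = sum(
        1 for i in range(n)
        if i >= len(predicted_tokens)
        or i >= len(target_tokens)
        or predicted_tokens[i] != target_tokens[i]
    )
    return mismatches / n

predicted = ["the", "cat", "sits"]
target = ["the", "cat", "sat", "down"]
print(utterance_loss(predicted, target))  # 0.5 (2 of 4 positions differ)
```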

    Attention Bottlenecks for Multimodal Fusion
    Invention Publication

    Publication No.: US20230177384A1

    Publication Date: 2023-06-08

    Application No.: US17545526

    Filing Date: 2021-12-08

    Applicant: Google LLC

    CPC classification number: G06N20/00 G06N5/04

    Abstract: Example embodiments according to aspects of the present disclosure provide an example computer-implemented method for multimodal data processing with improved cross-modal attention. The example method includes inputting a multimodal sequence to an example machine-learned model. The example model includes a first modal processing stream receiving a first modal portion of the multimodal sequence and a second modal processing stream receiving a second modal portion of the multimodal sequence. The example model includes fusing the first modal processing stream and the second modal processing stream across one or more fusion layers of the machine-learned model through a plurality of cross-modal context encodings. The example method includes outputting an inference based at least in part on the plurality of cross-modal context encodings.
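The fusion pattern the abstract describes, two modal streams that exchange information through a shared encoding layer rather than full pairwise cross-attention, can be caricatured with a tiny bottleneck-token sketch. Averaging stands in for attention here, and every name is an illustrative assumption rather than the patent's mechanism.

```python
# Illustrative sketch of bottleneck-style multimodal fusion: the two
# streams communicate only through a small set of shared bottleneck
# tokens, not by attending to every token of the other modality.

def mean_vec(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def bottleneck_fuse(stream_a, stream_b, num_bottleneck=1):
    """One fusion layer: bottleneck tokens summarize both streams, then
    each stream token is mixed with the shared bottleneck summary."""
    bottleneck = [mean_vec(stream_a + stream_b) for _ in range(num_bottleneck)]
    summary = mean_vec(bottleneck)
    fuse = lambda tok: [(x + s) / 2 for x, s in zip(tok, summary)]
    return [fuse(t) for t in stream_a], [fuse(t) for t in stream_b]

# Toy audio/video streams with orthogonal features.
audio = [[1.0, 0.0], [1.0, 0.0]]
video = [[0.0, 1.0], [0.0, 1.0]]
a_out, v_out = bottleneck_fuse(audio, video)
print(a_out[0])  # [0.75, 0.25]: each stream now carries some of the other
```

The design point is the narrow channel: with only a few bottleneck tokens, cross-modal exchange is forced through a compact shared representation, which is cheaper than full cross-attention between all token pairs.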

    Dense Video Object Captioning from Disjoint Vision

    Publication No.: US20250053753A1

    Publication Date: 2025-02-13

    Application No.: US18448508

    Filing Date: 2023-08-11

    Applicant: Google LLC

    Abstract: Provided are a new task and model for dense video object captioning—detecting, tracking, and captioning trajectories of all objects in a video. This task unifies spatial and temporal understanding of the video, and requires fine-grained language description. Example implementations of the proposed model for dense video object captioning can be trained end-to-end and can include different models for spatial localization, tracking, and captioning. As such, some example implementations of the present disclosure can train the proposed model with a mixture of disjoint tasks, and leverage diverse, large-scale datasets which supervise different parts of an example proposed model. This results in noteworthy zero-shot performance.
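The task output the abstract defines, a tracked trajectory plus a caption for every object in the video, maps naturally onto a small data structure. The dataclass below is an illustrative representation of that output, not a structure taken from the patent.

```python
# Illustrative representation of a dense-video-object-captioning output:
# one tracked object = an id, a caption, and per-frame bounding boxes.
from dataclasses import dataclass, field

@dataclass
class ObjectTrajectory:
    object_id: int
    caption: str
    # frame index -> (x, y, width, height)
    boxes: dict = field(default_factory=dict)

    def duration(self):
        """Number of frames in which the object is visible."""
        return len(self.boxes)

track = ObjectTrajectory(object_id=0, caption="a dog running across the lawn")
track.boxes[3] = (10, 20, 40, 30)
track.boxes[4] = (14, 21, 40, 30)
print(track.duration())  # 2
```

A full model would populate one such trajectory per detected object, which is why the abstract can mix disjoint training tasks: detection datasets supervise `boxes`, tracking datasets supervise the frame-to-frame association, and captioning datasets supervise `caption`.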

    Systems and Methods for Improved Video Understanding

    Publication No.: US20240428587A1

    Publication Date: 2024-12-26

    Application No.: US18827133

    Filing Date: 2024-09-06

    Applicant: Google LLC

    Abstract: A computer-implemented method for classifying video data with improved accuracy includes obtaining, by a computing system comprising one or more computing devices, video data comprising a plurality of video frames; extracting, by the computing system, a plurality of video tokens from the video data, the plurality of video tokens comprising a representation of spatiotemporal information in the video data; providing, by the computing system, the plurality of video tokens as input to a video understanding model, the video understanding model comprising a video transformer encoder model; and receiving, by the computing system, a classification output from the video understanding model.
