Image Retrieval with Deep Local Feature Descriptors and Attention-Based Keypoint Descriptors

    Publication Number: US20200004777A1

    Publication Date: 2020-01-02

    Application Number: US16558852

    Application Date: 2019-09-03

    Applicant: Google LLC

    Abstract: Systems and methods of the present disclosure can use machine-learned image descriptor models for image retrieval applications and other applications. A trained image descriptor model can be used to analyze a plurality of database images to create a large-scale index of keypoint descriptors associated with the database images. An image retrieval application can provide a query image as input to the trained image descriptor model, resulting in receipt of a set of keypoint descriptors associated with the query image. Keypoint descriptors associated with the query image can be analyzed relative to the index to determine matching descriptors (e.g., by implementing a nearest neighbor search). Matching descriptors can then be geometrically verified and used to identify one or more matching images from the plurality of database images to retrieve and provide as output (e.g., by providing for display) within the image retrieval application.
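
    As an illustrative aid (not part of the patent text), the following is a minimal Python sketch of the query-side flow described above: nearest-neighbor matching of keypoint descriptors against a database index, followed by RANSAC-based geometric verification. The inputs (query_descriptors, query_keypoints, index_descriptors, index_keypoints, index_image_ids) are assumed interfaces; only the matching and verification steps are shown.

    # Hypothetical sketch of descriptor matching and geometric verification for
    # image retrieval; descriptor extraction is assumed to be provided by a
    # trained image descriptor model (not shown here).
    import numpy as np
    import cv2
    from sklearn.neighbors import NearestNeighbors

    def retrieve_matching_images(query_descriptors, query_keypoints,
                                 index_descriptors, index_keypoints,
                                 index_image_ids, min_inliers=10):
        """Match query keypoint descriptors against a database index and
        geometrically verify candidate images with a RANSAC homography."""
        # Nearest-neighbor search over the database descriptor index.
        nn = NearestNeighbors(n_neighbors=1).fit(index_descriptors)
        _, idx = nn.kneighbors(query_descriptors)
        idx = idx.ravel()

        # Group descriptor matches by the database image they came from.
        matches_per_image = {}
        for q, d in enumerate(idx):
            matches_per_image.setdefault(index_image_ids[d], []).append((q, d))

        # Geometric verification: count RANSAC inliers per candidate image.
        verified = []
        for image_id, pairs in matches_per_image.items():
            if len(pairs) < 4:          # a homography needs at least 4 pairs
                continue
            src = np.float32([query_keypoints[q] for q, _ in pairs])
            dst = np.float32([index_keypoints[d] for _, d in pairs])
            _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            inliers = int(mask.sum()) if mask is not None else 0
            if inliers >= min_inliers:
                verified.append((image_id, inliers))

        # Rank matching images by the number of geometrically verified matches.
        return sorted(verified, key=lambda x: x[1], reverse=True)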

    Weakly-Supervised Action Localization by Sparse Temporal Pooling Network

    Publication Number: US20200272823A1

    Publication Date: 2020-08-27

    Application Number: US16625172

    Application Date: 2018-11-05

    Applicant: Google LLC

    Abstract: Systems and methods for a weakly supervised action localization model are provided. Example models according to example aspects of the present disclosure can localize and/or classify actions in untrimmed videos using machine-learned models, such as convolutional neural networks. The example models can predict temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. The example models can recognize actions and identify a sparse set of keyframes associated with actions through adaptive temporal pooling of video frames, wherein the loss function of the model is composed of a classification error and a sparsity of frame selection. Following action recognition with sparse keyframe attention, temporal proposals for action can be extracted using temporal class activation mappings, and final time intervals can be estimated corresponding to target actions.
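
    The loss structure described above (a classification error plus a sparsity term on frame selection) might be written roughly as in the following PyTorch-style sketch. The function and variable names, the sigmoid attention, and the sparsity coefficient are illustrative assumptions, not details taken from the patent.

    # Hypothetical sketch of the weakly supervised objective: video-level
    # classification loss plus an L1 sparsity penalty on the per-frame
    # attention weights used for adaptive temporal pooling.
    import torch
    import torch.nn.functional as F

    def sparse_pooling_loss(frame_features, attention_logits, classifier,
                            video_labels, sparsity_weight=1e-4):
        """frame_features:   (T, D) per-frame features for one untrimmed video.
        attention_logits: (T,)   unnormalized per-frame attention scores.
        video_labels:     (C,)   multi-hot (float) video-level action labels."""
        # Attention-weighted (adaptive) temporal pooling of frame features.
        attention = torch.sigmoid(attention_logits)                  # (T,)
        pooled = (attention.unsqueeze(1) * frame_features).sum(0) / (
            attention.sum() + 1e-8)                                   # (D,)

        # Video-level classification from the pooled representation.
        logits = classifier(pooled)                                   # (C,)
        cls_loss = F.binary_cross_entropy_with_logits(logits, video_labels)

        # L1 sparsity on attention encourages a sparse set of keyframes.
        sparsity_loss = attention.abs().mean()

        return cls_loss + sparsity_weight * sparsity_loss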

    Weakly-supervised action localization by sparse temporal pooling network

    Publication Number: US11881022B2

    Publication Date: 2024-01-23

    Application Number: US18181806

    Application Date: 2023-03-10

    Applicant: Google LLC

    CPC classification number: G06V20/40 G06F18/214 G06F18/24317 G06V20/44

    Abstract: Systems and methods for a weakly supervised action localization model are provided. Example models according to example aspects of the present disclosure can localize and/or classify actions in untrimmed videos using machine-learned models, such as convolutional neural networks. The example models can predict temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. The example models can recognize actions and identify a sparse set of keyframes associated with actions through adaptive temporal pooling of video frames, wherein the loss function of the model is composed of a classification error and a sparsity of frame selection. Following action recognition with sparse keyframe attention, temporal proposals for action can be extracted using temporal class activation mappings, and final time intervals can be estimated corresponding to target actions.
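
    The proposal-extraction step mentioned in the abstract (temporal class activation mappings turned into time intervals) could look roughly like the sketch below. The threshold, frame rate, and grouping rule are illustrative assumptions rather than values from the patent.

    # Hypothetical sketch of converting temporal class activation mappings
    # (T-CAMs) into temporal action proposals by thresholding per-frame scores
    # and grouping contiguous above-threshold frames into intervals.
    import numpy as np

    def extract_temporal_proposals(tcam, class_idx, threshold=0.5, fps=25.0):
        """tcam: (T, C) per-frame class activation scores in [0, 1].
        Returns (start_sec, end_sec, score) intervals for the given class."""
        scores = tcam[:, class_idx]
        active = scores >= threshold

        proposals = []
        start = None
        for t, flag in enumerate(active):
            if flag and start is None:
                start = t                       # interval opens
            elif not flag and start is not None:
                segment = scores[start:t]       # interval closes
                proposals.append((start / fps, t / fps, float(segment.mean())))
                start = None
        if start is not None:                   # interval runs to the last frame
            segment = scores[start:]
            proposals.append((start / fps, len(scores) / fps, float(segment.mean())))
        return proposals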

    Weakly-supervised action localization by sparse temporal pooling network

    Publication Number: US11640710B2

    Publication Date: 2023-05-02

    Application Number: US16625172

    Application Date: 2018-11-05

    Applicant: Google LLC

    Abstract: Systems and methods for a weakly supervised action localization model are provided. Example models according to example aspects of the present disclosure can localize and/or classify actions in untrimmed videos using machine-learned models, such as convolutional neural networks. The example models can predict temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. The example models can recognize actions and identify a sparse set of keyframes associated with actions through adaptive temporal pooling of video frames, wherein the loss function of the model is composed of a classification error and a sparsity of frame selection. Following action recognition with sparse keyframe attention, temporal proposals for action can be extracted using temporal class activation mappings, and final time intervals can be estimated corresponding to target actions.

    Image retrieval with deep local feature descriptors and attention-based keypoint descriptors

    Publication Number: US10650042B2

    Publication Date: 2020-05-12

    Application Number: US16558852

    Application Date: 2019-09-03

    Applicant: Google LLC

    Abstract: Systems and methods of the present disclosure can use machine-learned image descriptor models for image retrieval applications and other applications. A trained image descriptor model can be used to analyze a plurality of database images to create a large-scale index of keypoint descriptors associated with the database images. An image retrieval application can provide a query image as input to the trained image descriptor model, resulting in receipt of a set of keypoint descriptors associated with the query image. Keypoint descriptors associated with the query image can be analyzed relative to the index to determine matching descriptors (e.g., by implementing a nearest neighbor search). Matching descriptors can then be geometrically verified and used to identify one or more matching images from the plurality of database images to retrieve and provide as output (e.g., by providing for display) within the image retrieval application.
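
    On the database side, the large-scale keypoint-descriptor index mentioned in the abstract could be assembled along the lines of the sketch below. Here extract_keypoint_descriptors is a hypothetical stand-in for the trained image descriptor model, and the nearest-neighbor index shown is only one possible backing structure.

    # Hypothetical sketch of building a searchable index of keypoint descriptors
    # over a collection of database images.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def build_descriptor_index(database_images, extract_keypoint_descriptors):
        """database_images: iterable of (image_id, image) pairs.
        Runs the descriptor model over every image and stacks the results
        into one nearest-neighbor index plus lookup arrays."""
        descriptors, keypoints, image_ids = [], [], []
        for image_id, image in database_images:
            desc, kpts = extract_keypoint_descriptors(image)   # (N, D), (N, 2)
            descriptors.append(desc)
            keypoints.append(kpts)
            image_ids.extend([image_id] * len(desc))

        descriptors = np.vstack(descriptors)
        keypoints = np.vstack(keypoints)
        index = NearestNeighbors(n_neighbors=1).fit(descriptors)
        return index, descriptors, keypoints, np.array(image_ids)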

    Weakly-Supervised Action Localization by Sparse Temporal Pooling Network

    Publication Number: US20230215169A1

    Publication Date: 2023-07-06

    Application Number: US18181806

    Application Date: 2023-03-10

    Applicant: Google LLC

    CPC classification number: G06V20/40 G06F18/214 G06F18/24317 G06V20/44

    Abstract: Systems and methods for a weakly supervised action localization model are provided. Example models according to example aspects of the present disclosure can localize and/or classify actions in untrimmed videos using machine-learned models, such as convolutional neural networks. The example models can predict temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. The example models can recognize actions and identify a sparse set of keyframes associated with actions through adaptive temporal pooling of video frames, wherein the loss function of the model is composed of a classification error and a sparsity of frame selection. Following action recognition with sparse keyframe attention, temporal proposals for action can be extracted using temporal class activation mappings, and final time intervals can be estimated corresponding to target actions.
