Video frame action detection using gated history

    Publication number: US11895343B2

    Publication date: 2024-02-06

    Application number: US17852310

    Filing date: 2022-06-28

    CPC classification number: H04N21/23418 G06T7/246 G06V20/46 G06T2207/10021

    Abstract: Example solutions for video frame action detection use a gated history and include: receiving a video stream comprising a plurality of video frames; grouping the plurality of video frames into a set of present video frames and a set of historical video frames, the set of present video frames comprising a current video frame; determining a set of attention weights for the set of historical video frames, the set of attention weights indicating how informative a video frame is for predicting action in the current video frame; weighting the set of historical video frames with the set of attention weights to produce a set of weighted historical video frames; and based on at least the set of weighted historical video frames and the set of present video frames, generating an action prediction for the current video frame.
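The gated-history mechanism described above can be sketched as scaled dot-product attention over the historical frames, where the weights express how informative each frame is for the current prediction. This is a minimal illustration, not the patented implementation; the shapes, the query construction, and the pooling step are all assumptions.

```python
import numpy as np

def gated_history_prediction(history, present, query):
    """Hypothetical sketch of gated-history action detection.

    history: (H, D) feature vectors for the historical frames
    present: (P, D) feature vectors for the present frames
    query:   (D,)  query vector derived from the current frame
    """
    # Attention weights over historical frames: softmax of scaled
    # dot-product scores against the current-frame query.
    scores = history @ query / np.sqrt(history.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Gate (weight) the historical frames by their informativeness.
    weighted_history = weights[:, None] * history
    # Pool the weighted history with the present frames; the mean here
    # is a stand-in for a learned classifier head.
    context = np.concatenate(
        [weighted_history.sum(axis=0, keepdims=True), present])
    return weights, context.mean(axis=0)
```

In a trained model the query, scores, and head would all be learned; the softmax step is what makes the weights a normalized measure of per-frame informativeness.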

    Video frame action detection using gated history

    Publication number: US12192543B2

    Publication date: 2025-01-07

    Application number: US18393664

    Filing date: 2023-12-21

    Abstract: Example solutions for video frame action detection use a gated history and include: receiving a video stream comprising a plurality of video frames; grouping the plurality of video frames into a set of present video frames and a set of historical video frames, the set of present video frames comprising a current video frame; determining a set of attention weights for the set of historical video frames, the set of attention weights indicating how informative a video frame is for predicting action in the current video frame; weighting the set of historical video frames with the set of attention weights to produce a set of weighted historical video frames; and based on at least the set of weighted historical video frames and the set of present video frames, generating an action prediction for the current video frame.

    Task-aware recommendation of hyperparameter configurations

    Publication number: US11544561B2

    Publication date: 2023-01-03

    Application number: US16875782

    Filing date: 2020-05-15

    Abstract: Providing a task-aware recommendation of hyperparameter configurations for a neural network architecture. First, a joint space of tasks and hyperparameter configurations is constructed using a plurality of tasks (each of which corresponds to a dataset) and a plurality of hyperparameter configurations. The joint space is used as training data to train and optimize a performance prediction network, such that for a given unseen task corresponding to one of the plurality of tasks and a given hyperparameter configuration corresponding to one of the plurality of hyperparameter configurations, the performance prediction network is configured to predict performance that is to be achieved for the unseen task using the hyperparameter configuration.
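The joint-space idea above can be illustrated with a toy predictor: concatenate a task representation with a hyperparameter-configuration vector and fit a model that maps the pair to observed performance. The closed-form linear fit below is a hypothetical stand-in for the performance prediction network, and the feature names are assumptions, not the patent's design.

```python
import numpy as np

def fit_performance_predictor(task_feats, config_feats, perf):
    # Joint (task, config) design matrix with a bias column; a linear
    # least-squares fit stands in for the trained prediction network.
    X = np.hstack([task_feats, config_feats, np.ones((len(perf), 1))])
    w, *_ = np.linalg.lstsq(X, perf, rcond=None)
    return w

def predict_performance(w, task_feat, config_feat):
    # Score one (task, hyperparameter-configuration) pair.
    x = np.concatenate([task_feat, config_feat, [1.0]])
    return float(x @ w)
```

The point of the joint space is that a single model, trained across many (task, configuration, performance) triples, can rank candidate configurations for a new task without running them all.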

    Leveraging unsupervised meta-learning to boost few-shot action recognition

    Publication number: US12087043B2

    Publication date: 2024-09-10

    Application number: US17535517

    Filing date: 2021-11-24

    Abstract: The disclosure herein describes preparing and using a cross-attention model for action recognition using pre-trained encoders and novel class fine-tuning. Training video data is transformed into augmented training video segments, which are used to train an appearance encoder and an action encoder. The appearance encoder is trained to encode video segments based on spatial semantics and the action encoder is trained to encode video segments based on spatio-temporal semantics. A set of hard-mined training episodes are generated using the trained encoders. The cross-attention module is then trained for action-appearance aligned classification using the hard-mined training episodes. Then, support video segments are obtained, wherein each support video segment is associated with video classes. The cross-attention module is fine-tuned using the obtained support video segments and the associated video classes. A query video segment is obtained and classified as a video class using the fine-tuned cross-attention module.
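The action-appearance alignment above can be sketched as a cross-attention step in which appearance (spatial) tokens attend over action (spatio-temporal) tokens. This is a minimal single-head sketch with no learned projections; token counts and dimensions are illustrative assumptions, not the patented module.

```python
import numpy as np

def cross_attention(appearance_tokens, action_tokens):
    """Hypothetical sketch: appearance tokens (N, D) attend over
    action tokens (M, D), yielding action-aligned appearance features."""
    d = appearance_tokens.shape[1]
    # Scaled dot-product scores between the two token sets.
    scores = appearance_tokens @ action_tokens.T / np.sqrt(d)
    # Row-wise softmax: each appearance token's weights sum to 1.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Weighted sum of action tokens per appearance token.
    return weights @ action_tokens
```

In the described pipeline, a module like this would be trained on hard-mined episodes and then fine-tuned on the support video segments before classifying queries.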

    Computing system for expressive three-dimensional facial animation

    Publication number: US11238885B2

    Publication date: 2022-02-01

    Application number: US16173491

    Filing date: 2018-10-29

    Abstract: A computer-implemented technique for animating a visual representation of a face based on spoken words of a speaker is described herein. A computing device receives an audio sequence comprising content features reflective of spoken words uttered by a speaker. The computing device generates latent content variables and latent style variables based upon the audio sequence. The latent content variables are used to synchronize movement of lips on the visual representation to the spoken words uttered by the speaker. The latent style variables are derived from an expected appearance of facial features of the speaker as the speaker utters the spoken words and are used to synchronize movement of full facial features of the visual representation to the spoken words uttered by the speaker. The computing device causes the visual representation of the face to be animated on a display based upon the latent content variables and the latent style variables.
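The content/style factorization above can be illustrated schematically: split a per-frame audio encoding into a content stream (driving lip sync) and a style stream (driving full-face expression), then decode both into face parameters. The split-by-slicing scheme, the dimensions, and the tanh decoder are all illustrative assumptions, not the patented model.

```python
import numpy as np

def split_latents(audio_encoding, content_dim):
    # Illustrative factorization: the first content_dim channels act as
    # latent content variables (lip sync), the remainder as latent
    # style variables (full-face expression).
    content = audio_encoding[:, :content_dim]
    style = audio_encoding[:, content_dim:]
    return content, style

def animate_face(content, style, w_content, w_style):
    # Stand-in decoder: map both latent streams to per-frame face
    # parameters (e.g. blendshape weights), bounded by tanh.
    return np.tanh(content @ w_content + style @ w_style)
```

In the described system both streams are inferred jointly from the audio, so the same utterance can drive both accurate lip motion and speaker-appropriate expression.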
