Dynamic matrix convolution with channel fusion

    Publication Number: US12223412B2

    Publication Date: 2025-02-11

    Application Number: US17123697

    Filing Date: 2020-12-16

    Abstract: A computer device for automatic feature detection comprises a processor, a communication device, and a memory configured to hold instructions executable by the processor to instantiate a dynamic convolution neural network, receive input data via the communication device, and execute the dynamic convolution neural network to automatically detect features in the input data. The dynamic convolution neural network compresses the input data from an input space having a dimensionality equal to a predetermined number of channels into an intermediate space having a dimensionality less than the number of channels. The dynamic convolution neural network dynamically fuses the channels into an intermediate representation within the intermediate space and expands the intermediate representation from the intermediate space to an expanded representation in an output space having a higher dimensionality than the dimensionality of the intermediate space. The features in the input data are automatically detected based on the expanded representation.
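
    The compress, fuse, expand pipeline in this abstract can be pictured with a short sketch. The PyTorch module below is a minimal illustration, not the patented implementation: it squeezes the input from C channels into a smaller intermediate space, applies an input-dependent fusion matrix predicted from pooled context, and expands back to the output space. The names DynamicChannelFusion, reduction, and latent are hypothetical.

```python
# Illustrative sketch only; names, shapes, and layer choices are assumptions, not from the patent.
import torch
import torch.nn as nn


class DynamicChannelFusion(nn.Module):
    """Compress C input channels into a lower-dimensional intermediate space,
    fuse them with an input-dependent (dynamic) matrix, then expand back out."""

    def __init__(self, in_channels: int, out_channels: int, reduction: int = 4):
        super().__init__()
        latent = max(in_channels // reduction, 1)   # dimensionality of the intermediate space
        self.compress = nn.Conv2d(in_channels, latent, kernel_size=1, bias=False)
        self.expand = nn.Conv2d(latent, out_channels, kernel_size=1, bias=False)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.to_fusion = nn.Linear(in_channels, latent * latent)  # predicts the dynamic fusion matrix
        self.latent = latent

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b = x.size(0)
        ctx = self.pool(x).flatten(1)                           # (B, C) pooled context
        fusion = self.to_fusion(ctx).view(b, self.latent, self.latent)
        z = self.compress(x)                                    # (B, L, H, W) compressed channels
        z = torch.einsum('bij,bjhw->bihw', fusion, z)           # dynamic channel fusion
        return self.expand(z)                                   # expand to the output space
```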

    Video frame action detection using gated history

    Publication Number: US12192543B2

    Publication Date: 2025-01-07

    Application Number: US18393664

    Filing Date: 2023-12-21

    Abstract: Example solutions for video frame action detection use a gated history and include: receiving a video stream comprising a plurality of video frames; grouping the plurality of video frames into a set of present video frames and a set of historical video frames, the set of present video frames comprising a current video frame; determining a set of attention weights for the set of historical video frames, the set of attention weights indicating how informative a video frame is for predicting action in the current video frame; weighting the set of historical video frames with the set of attention weights to produce a set of weighted historical video frames; and based on at least the set of weighted historical video frames and the set of present video frames, generating an action prediction for the current video frame.
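
    As a rough illustration of the weighting step, here is a minimal PyTorch sketch. The class name GatedHistoryPooling, the feature dimensions, and the assumption that a backbone has already produced per-frame features are all hypothetical, not taken from the patent: historical frame features are scored against the current frame, normalized into attention weights, and the weighted history is combined with the present frames to predict an action.

```python
# Illustrative sketch only; names and shapes are assumptions, not from the patent.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedHistoryPooling(nn.Module):
    """Score each historical frame feature against the current frame,
    down-weight uninformative history, and predict an action for the
    current frame from the gated history plus the present frames."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.query = nn.Linear(feat_dim, feat_dim)
        self.key = nn.Linear(feat_dim, feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, history: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
        # history: (B, T_hist, D) frame features; present: (B, T_pres, D),
        # where the last present frame is the current frame.
        current = present[:, -1]                                           # (B, D)
        scores = torch.einsum('bd,btd->bt', self.query(current), self.key(history))
        weights = F.softmax(scores / history.size(-1) ** 0.5, dim=1)       # attention weights
        gated_history = torch.einsum('bt,btd->bd', weights, history)       # weighted history
        present_summary = present.mean(dim=1)
        return self.classifier(torch.cat([gated_history, present_summary], dim=-1))
```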

    Task-aware recommendation of hyperparameter configurations

    Publication Number: US11544561B2

    Publication Date: 2023-01-03

    Application Number: US16875782

    Filing Date: 2020-05-15

    Abstract: Providing a task-aware recommendation of hyperparameter configurations for a neural network architecture. First, a joint space of tasks and hyperparameter configurations is constructed using a plurality of tasks (each of which corresponds to a dataset) and a plurality of hyperparameter configurations. The joint space is used as training data to train and optimize a performance prediction network, such that for a given unseen task corresponding to one of the plurality of tasks and a given hyperparameter configuration corresponding to one of the plurality of hyperparameter configurations, the performance prediction network is configured to predict performance that is to be achieved for the unseen task using the hyperparameter configuration.
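
    A minimal sketch of what such a performance prediction network might look like, assuming PyTorch and vector representations for tasks and configurations; the names PerformancePredictor and recommend are illustrative, not from the patent. A recommendation then amounts to scoring every candidate configuration for the new task and picking the highest predicted performance.

```python
# Illustrative sketch only; names and shapes are assumptions, not from the patent.
import torch
import torch.nn as nn


class PerformancePredictor(nn.Module):
    """Predict the performance a hyperparameter configuration would achieve
    on a task, from a joint (task embedding, configuration vector) input."""

    def __init__(self, task_dim: int, config_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(task_dim + config_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, task_emb: torch.Tensor, config: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([task_emb, config], dim=-1)).squeeze(-1)


def recommend(model: PerformancePredictor,
              task_emb: torch.Tensor,
              candidate_configs: torch.Tensor) -> int:
    """Score every candidate configuration for the (unseen) task and
    return the index of the highest predicted performance."""
    with torch.no_grad():
        scores = model(task_emb.expand(len(candidate_configs), -1), candidate_configs)
    return int(scores.argmax())
```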

    Prior informed pose and scale estimation

    Publication Number: US11443455B2

    Publication Date: 2022-09-13

    Application Number: US16744068

    Filing Date: 2020-01-15

    Abstract: A scale and pose estimation method for a camera system is disclosed. Camera data for a scene acquired by the camera system is received. A rotation prior parameter characterizing a gravity direction is received. A scale prior parameter characterizing scale of the camera system is received. A cost of a cost function is calculated for a similarity transformation that is configured to encode a scale and pose of the camera system. The cost of the cost function is influenced by the rotation prior parameter and the scale prior parameter. A solved similarity transformation is determined upon calculating a cost for the cost function that is less than a threshold cost. An estimated scale and pose of the camera system is output based on the solved similarity transformation.
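
    The cost structure described here (a data term plus rotation-prior and scale-prior penalties) can be sketched in a few lines. The NumPy function below is a hedged illustration, not the patented formulation; the weights w_rot and w_scale, the use of 3D point correspondences as the data term, and the choice of the z-axis as the camera "up" direction are all assumptions. An optimizer would adjust (s, R, t) until the returned cost drops below a threshold, at which point the similarity transformation is taken as solved.

```python
# Illustrative sketch only; the terms and weights are assumptions, not from the patent.
import numpy as np


def sim3_cost(R: np.ndarray, t: np.ndarray, s: float,
              src_pts: np.ndarray, dst_pts: np.ndarray,
              gravity_prior: np.ndarray, scale_prior: float,
              w_rot: float = 1.0, w_scale: float = 1.0) -> float:
    """Cost of a similarity transform (s, R, t): point residuals plus penalty
    terms that pull the solution toward the rotation (gravity) and scale priors."""
    # Data term: how well the transform maps source points onto destination points.
    residuals = dst_pts - (s * (src_pts @ R.T) + t)
    data_cost = np.sum(residuals ** 2)
    # Rotation prior: the transformed "up" axis should align with the gravity direction.
    up_world = R @ np.array([0.0, 0.0, 1.0])
    rot_cost = w_rot * np.sum((up_world - gravity_prior) ** 2)
    # Scale prior: keep the estimated scale near the prior scale.
    scale_cost = w_scale * (s - scale_prior) ** 2
    return data_cost + rot_cost + scale_cost
```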

    Video frame action detection using gated history

    Publication Number: US11895343B2

    Publication Date: 2024-02-06

    Application Number: US17852310

    Filing Date: 2022-06-28

    CPC classification number: H04N21/23418 G06T7/246 G06V20/46 G06T2207/10021

    Abstract: Example solutions for video frame action detection use a gated history and include: receiving a video stream comprising a plurality of video frames; grouping the plurality of video frames into a set of present video frames and a set of historical video frames, the set of present video frames comprising a current video frame; determining a set of attention weights for the set of historical video frames, the set of attention weights indicating how informative a video frame is for predicting action in the current video frame; weighting the set of historical video frames with the set of attention weights to produce a set of weighted historical video frames; and based on at least the set of weighted historical video frames and the set of present video frames, generating an action prediction for the current video frame.
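
    This patent shares its abstract with the US12192543B2 entry above; rather than repeat that sketch, the snippet below illustrates only the first step, grouping a stream of per-frame features into historical and present sets, and shows how it could feed the GatedHistoryPooling sketch given earlier. The split_stream name and the present_len parameter are illustrative assumptions.

```python
# Illustrative sketch only; names and shapes are assumptions, not from the patent.
import torch


def split_stream(frame_feats: torch.Tensor, present_len: int):
    """Group a stream of per-frame features into historical and present sets;
    the last frame of the present set is the current frame."""
    history = frame_feats[:, :-present_len]      # (B, T - present_len, D)
    present = frame_feats[:, -present_len:]      # (B, present_len, D)
    return history, present


# Example wiring with the GatedHistoryPooling sketch shown earlier (backbone assumed):
# feats = backbone(video_frames)                 # (B, T, D) per-frame features
# history, present = split_stream(feats, present_len=8)
# logits = gated_head(history, present)          # action prediction for the current frame
```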

    Leveraging unsupervised meta-learning to boost few-shot action recognition

    Publication Number: US12087043B2

    Publication Date: 2024-09-10

    Application Number: US17535517

    Filing Date: 2021-11-24

    Abstract: The disclosure herein describes preparing and using a cross-attention module for action recognition using pre-trained encoders and novel class fine-tuning. Training video data is transformed into augmented training video segments, which are used to train an appearance encoder and an action encoder. The appearance encoder is trained to encode video segments based on spatial semantics and the action encoder is trained to encode video segments based on spatio-temporal semantics. A set of hard-mined training episodes are generated using the trained encoders. The cross-attention module is then trained for action-appearance aligned classification using the hard-mined training episodes. Then, support video segments are obtained, wherein each support video segment is associated with video classes. The cross-attention module is fine-tuned using the obtained support video segments and the associated video classes. A query video segment is obtained and classified as a video class using the fine-tuned cross-attention module.
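
    As a rough sketch of the classification stage only, the PyTorch module below cross-attends a query clip's feature to support clip features and scores it against per-class prototypes. The name CrossAttentionHead, the support_labels argument, and the cosine-similarity scoring are assumptions for illustration; the patent's actual action-appearance alignment and hard-mined episode training are not reproduced here.

```python
# Illustrative sketch only; names, shapes, and scoring are assumptions, not from the patent.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttentionHead(nn.Module):
    """Attend a query video feature over support features, then classify the
    query by its similarity to attended per-class prototypes."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.q = nn.Linear(feat_dim, feat_dim)
        self.kv = nn.Linear(feat_dim, feat_dim)

    def forward(self, query_feat: torch.Tensor, support_feats: torch.Tensor,
                support_labels: torch.Tensor, num_classes: int) -> torch.Tensor:
        # query_feat: (D,) fused appearance+action feature for the query clip.
        # support_feats: (N, D) features for N support clips; labels in [0, num_classes).
        attn = F.softmax(self.q(query_feat) @ self.kv(support_feats).T, dim=-1)   # (N,)
        attended = attn[:, None] * support_feats                                   # (N, D)
        logits = []
        for c in range(num_classes):
            proto = attended[support_labels == c].mean(dim=0)    # per-class prototype
            logits.append(F.cosine_similarity(query_feat, proto, dim=0))
        return torch.stack(logits)                               # (num_classes,) class scores
```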
