EFFICIENT NEURAL NETWORKS WITH ELABORATE MATRIX STRUCTURES IN MACHINE LEARNING ENVIRONMENTS

    Publication Number: US20250053814A1

    Publication Date: 2025-02-13

    Application Number: US18805370

    Application Date: 2024-08-14

    Abstract: A mechanism is described for facilitating slimming of neural networks in machine learning environments. A method of embodiments, as described herein, includes learning a first neural network associated with machine learning processes to be performed by a processor of a computing device, where learning includes analyzing a plurality of channels associated with one or more layers of the first neural network. The method may further include computing a plurality of scaling factors to be associated with the plurality of channels such that each channel is assigned a scaling factor, wherein each scaling factor indicates the relevance of its corresponding channel within the first neural network. The method may further include pruning the first neural network into a second neural network by removing one or more channels of the plurality of channels having low relevance as indicated by the one or more scaling factors assigned to those channels.
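
    As a rough illustration of the pruning idea in this abstract, the sketch below treats BatchNorm scale parameters as the per-channel scaling factors and removes the lowest-ranked channels. The PyTorch framing, the keep_ratio parameter, and the helper names are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch of scaling-factor-based channel pruning (PyTorch assumed).
# Each BatchNorm weight acts as a per-channel scaling factor; channels whose
# factors rank low are treated as low-relevance and removed.
import torch
import torch.nn as nn

def select_relevant_channels(bn: nn.BatchNorm2d, keep_ratio: float = 0.5):
    """Return indices of channels whose scaling factors rank in the top keep_ratio."""
    factors = bn.weight.detach().abs()                # one scaling factor per channel
    k = max(1, int(keep_ratio * factors.numel()))     # number of channels to keep
    return torch.topk(factors, k).indices.sort().values

def prune_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d, keep_idx: torch.Tensor):
    """Build a slimmer conv/BN pair containing only the kept output channels."""
    new_conv = nn.Conv2d(conv.in_channels, len(keep_idx),
                         kernel_size=conv.kernel_size, stride=conv.stride,
                         padding=conv.padding, bias=conv.bias is not None)
    new_conv.weight.data = conv.weight.data[keep_idx].clone()
    if conv.bias is not None:
        new_conv.bias.data = conv.bias.data[keep_idx].clone()
    new_bn = nn.BatchNorm2d(len(keep_idx))
    new_bn.weight.data = bn.weight.data[keep_idx].clone()
    new_bn.bias.data = bn.bias.data[keep_idx].clone()
    new_bn.running_mean = bn.running_mean[keep_idx].clone()
    new_bn.running_var = bn.running_var[keep_idx].clone()
    return new_conv, new_bn

# Example: prune one conv/BN block of a trained "first" network into a slimmer "second" one.
conv, bn = nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64)
keep = select_relevant_channels(bn, keep_ratio=0.5)
slim_conv, slim_bn = prune_conv_bn(conv, bn, keep)
```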

    SAMPLE-ADAPTIVE 3D FEATURE CALIBRATION AND ASSOCIATION AGENT

    Publication Number: US20240296650A1

    Publication Date: 2024-09-05

    Application Number: US18572351

    Application Date: 2021-10-13

    CPC classification number: G06V10/44 G06V10/771 G06V10/82

    Abstract: Technology to conduct image sequence/video analysis can include a processor, and a memory coupled to the processor, the memory storing a neural network, the neural network comprising a plurality of convolution layers, a network depth relay structure comprising a plurality of network depth calibration layers, where each network depth calibration layer is coupled to an output of a respective one of the plurality of convolution layers, and a feature dimension relay structure comprising a plurality of feature dimension calibration slices, where the feature dimension relay structure is coupled to an output of another layer of the plurality of convolution layers. Each network depth calibration layer is coupled to a preceding network depth calibration layer via first hidden state and cell state signals, and each feature dimension calibration slice is coupled to a preceding feature dimension calibration slice via second hidden state and cell state signals.
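
    The relay structure in this abstract can be pictured with a small sketch in which each convolution stage's output is calibrated by a layer that also passes hidden and cell state signals on to the next depth. The PyTorch layer sizes, the pooling/gating form, and the class names are illustrative assumptions, and the feature-dimension relay branch is omitted.

```python
# Minimal sketch of a "network depth relay": each convolution stage's output is
# calibrated by a cell that relays hidden/cell state to the next stage
# (PyTorch assumed; sizes and the gating form are illustrative choices).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthCalibration(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.cell = nn.LSTMCell(channels, channels)   # relays (h, c) between depths

    def forward(self, feat, state):
        # Summarize the feature map, update the relayed state, and re-weight channels.
        pooled = F.adaptive_avg_pool2d(feat, 1).flatten(1)        # (N, C)
        h, c = self.cell(pooled, state)                           # hidden/cell state relay
        gate = torch.sigmoid(h).unsqueeze(-1).unsqueeze(-1)       # per-channel calibration gate
        return feat * gate, (h, c)

class RelayNet(nn.Module):
    def __init__(self, channels: int = 32, depth: int = 3):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(3 if i == 0 else channels, channels, 3, padding=1) for i in range(depth)])
        self.calib = nn.ModuleList([DepthCalibration(channels) for _ in range(depth)])
        self.channels = channels

    def forward(self, x):
        n = x.size(0)
        state = (x.new_zeros(n, self.channels), x.new_zeros(n, self.channels))
        for conv, cal in zip(self.convs, self.calib):
            x = F.relu(conv(x))
            x, state = cal(x, state)       # state carried to the next depth's calibration layer
        return x

out = RelayNet()(torch.randn(2, 3, 32, 32))   # -> shape (2, 32, 32, 32)
```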

    APPARATUS AND METHODS FOR THREE-DIMENSIONAL POSE ESTIMATION

    Publication Number: US20230298204A1

    Publication Date: 2023-09-21

    Application Number: US18000389

    Application Date: 2020-06-26

    Abstract: Apparatus and methods for three-dimensional pose estimation are disclosed herein. An example apparatus includes an image synchronizer to synchronize a first image generated by a first image capture device and a second image generated by a second image capture device, the first image and the second image including a subject; a two-dimensional pose detector to predict, by executing a first neural network model, first positions of keypoints of the subject based on the first image to generate first two-dimensional data, and to predict second positions of the keypoints based on the second image to generate second two-dimensional data; and a three-dimensional pose calculator to generate, by executing a second neural network model, a three-dimensional graphical model representing a pose of the subject in the first image and the second image based on the first two-dimensional data and the second two-dimensional data.
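
    A minimal sketch of the two-view geometry behind this abstract: once a 2D detector has produced matching keypoints in the two synchronized images, each 3D joint can be recovered by linear triangulation. The NumPy helpers below are illustrative; the projection matrices P1/P2 and the keypoint arrays are assumed inputs, and the neural network detector itself is not shown.

```python
# Minimal two-view triangulation sketch (NumPy assumed).
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one keypoint seen in two views.

    P1, P2: 3x4 camera projection matrices; uv1, uv2: pixel coordinates (u, v).
    """
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                      # homogeneous -> Euclidean 3D point

def build_3d_pose(P1, P2, kps1, kps2):
    """Triangulate every matched keypoint pair into a K x 3 skeleton array."""
    return np.array([triangulate_point(P1, P2, a, b) for a, b in zip(kps1, kps2)])
```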

    METHODS AND SYSTEMS FOR BUDGETED AND SIMPLIFIED TRAINING OF DEEP NEURAL NETWORKS

    Publication Number: US20220222492A1

    Publication Date: 2022-07-14

    Application Number: US17584216

    Application Date: 2022-01-25

    Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image, and a tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated feature maps by selecting a subset of them. Soft attention is applied by the local attention mechanism to the selected subset by assigning weights to the selected feature maps to obtain weighted feature maps. The weighted feature maps are stored in the LSTM, and a Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
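
    A minimal sketch of the attention path described in the second example, assuming PyTorch: a CNN produces feature maps, hard attention keeps a scored subset, soft attention weights that subset, and an LSTM cell over the weighted summary feeds a Q-value head. The layer sizes, scoring function, and class name are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of a recurrent Q-network with local hard/soft attention
# between a CNN and an LSTM (PyTorch assumed; sizes are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalAttentionRDQN(nn.Module):
    def __init__(self, n_actions: int, n_maps: int = 32, keep: int = 8):
        super().__init__()
        self.keep = keep
        self.cnn = nn.Sequential(nn.Conv2d(3, n_maps, 3, padding=1), nn.ReLU())
        self.score = nn.Linear(n_maps, n_maps)        # per-map relevance scores
        self.lstm = nn.LSTMCell(keep, 64)             # stores the weighted summary
        self.q_head = nn.Linear(64, n_actions)

    def forward(self, image, state=None):
        maps = self.cnn(image)                                    # (N, M, H, W) feature maps
        summary = F.adaptive_avg_pool2d(maps, 1).flatten(1)       # (N, M) per-map summary
        scores = self.score(summary)                              # (N, M)
        # Hard attention: select a subset of the generated feature maps.
        top = torch.topk(scores, self.keep, dim=1)
        selected = torch.gather(summary, 1, top.indices)          # (N, keep)
        # Soft attention: weight the selected maps before the LSTM.
        weights = torch.softmax(top.values, dim=1)
        weighted = selected * weights
        h, c = self.lstm(weighted, state)                         # recurrent memory
        return self.q_head(h), (h, c)                             # Q value per action

q_values, state = LocalAttentionRDQN(n_actions=4)(torch.randn(1, 3, 84, 84))
```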
