Joint training of neural networks using multi-scale hard example mining

    Publication Number: US11120314B2

    Publication Date: 2021-09-14

    Application Number: US16491735

    Application Date: 2017-04-07

    Abstract: An example apparatus for mining multi-scale hard examples includes a convolutional neural network to receive a mini-batch of candidate samples and generate basic feature maps. The apparatus also includes a feature extractor and combiner to generate concatenated feature maps based on the basic feature maps and extract the concatenated feature maps for each of a plurality of received candidate boxes. The apparatus further includes a sample scorer and miner to score the candidate samples with multi-task loss scores and select the candidate samples whose multi-task loss scores exceed a threshold score.
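
    In code, the mining step reduces to thresholding per-candidate multi-task loss scores. A minimal Python sketch follows, assuming the multi-task score is a weighted sum of classification and localization losses; the function name, the lam weight, and the example numbers are illustrative assumptions, not details taken from the patent.

        import numpy as np

        def mine_hard_examples(cls_losses, loc_losses, threshold, lam=1.0):
            # Combine per-candidate classification and localization losses
            # into one multi-task score (the weighted sum is an assumption).
            scores = np.asarray(cls_losses) + lam * np.asarray(loc_losses)
            # Keep only candidates whose score exceeds the threshold.
            return np.flatnonzero(scores > threshold), scores

        # Example: six candidate boxes from one mini-batch.
        cls = [0.10, 0.90, 0.30, 1.40, 0.05, 0.70]
        loc = [0.02, 0.80, 0.10, 0.90, 0.01, 0.50]
        hard, scores = mine_hard_examples(cls, loc, threshold=1.0)
        print(hard)  # [1 3 5]: the hard examples kept for further training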

    METHODS AND APPARATUS FOR ENHANCING A BINARY WEIGHT NEURAL NETWORK USING A DEPENDENCY TREE

    Publication Number: US20200167654A1

    Publication Date: 2020-05-28

    Application Number: US16615097

    Application Date: 2018-05-23

    Abstract: Methods and apparatus are disclosed for enhancing a binary weight neural network using a dependency tree. A method of enhancing a convolutional neural network (CNN) having binary weights includes constructing a tree for obtained binary tensors, the tree having a plurality of nodes beginning with a root node in each layer of the CNN. A convolution of an input feature map with an input binary tensor is calculated at the root node of the tree. A next node is then searched from the root node, and a convolution is calculated at that node using the previous convolution result from the root node. The search is repeated for all nodes reachable from the root node, with a convolution calculated at each node using a previous convolution result.
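
    The reuse described above works because neighboring binary weight tensors in the tree differ in only a few positions, so a child node's convolution equals its parent's result plus a sparse correction. The 1D Python sketch below illustrates that incremental step; the tree construction itself is omitted, and all names here are illustrative assumptions.

        import numpy as np

        def conv1d(x, w):
            # Plain valid cross-correlation: the full computation, done
            # once at the root node of the tree.
            n = len(x) - len(w) + 1
            return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

        def conv1d_from_parent(x, parent_result, w_parent, w_child):
            # Child weights differ from parent weights in few positions,
            # so only those positions contribute a correction term.
            diff = w_child - w_parent           # nonzero only where bits flip
            out = parent_result.copy()
            n = len(parent_result)
            for j in np.flatnonzero(diff):
                out += diff[j] * x[j:j + n]
            return out

        rng = np.random.default_rng(0)
        x = rng.standard_normal(16)
        w_parent = rng.choice([-1.0, 1.0], size=5)
        w_child = w_parent.copy()
        w_child[2] *= -1.0                      # child flips one binary weight
        root = conv1d(x, w_parent)              # computed once at the root
        fast = conv1d_from_parent(x, root, w_parent, w_child)
        assert np.allclose(fast, conv1d(x, w_child))

    With k flipped weights, the correction touches only k input windows instead of recomputing every dot product, which is the saving the dependency-tree ordering is meant to maximize.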

    LOSS-ERROR-AWARE QUANTIZATION OF A LOW-BIT NEURAL NETWORK

    Publication Number: US20210019630A1

    Publication Date: 2021-01-21

    Application Number: US16982441

    Application Date: 2018-07-26

    Abstract: Methods, apparatus, systems and articles of manufacture for loss-error-aware quantization of a low-bit neural network are disclosed. An example apparatus includes a network weight partitioner to partition unquantized network weights of a first network model into a first group to be quantized and a second group to be retrained. The example apparatus includes a loss calculator to process network weights to calculate a first loss. The example apparatus includes a weight quantizer to quantize the first group of network weights to generate low-bit second network weights. In the example apparatus, the loss calculator is to determine a difference between the first loss and a second loss. The example apparatus includes a weight updater to update the second group of network weights based on the difference. The example apparatus includes a network model deployer to deploy a low-bit network model including the low-bit second network weights.
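
    Stripped to a toy, the flow in the abstract is: partition the weights, quantize one group, measure how much the loss moved, and update the other group to absorb that error. The Python sketch below assumes a magnitude-based partition, uniform symmetric quantization, and plain gradient steps; none of these specifics come from the patent text.

        import numpy as np

        def quantize(w, bits=2):
            # Uniform symmetric low-bit quantization (an assumed scheme).
            m = np.max(np.abs(w))
            if m == 0.0:
                return w
            levels = 2 ** (bits - 1) - 1    # 2 bits -> {-1, 0, +1} * scale
            scale = m / levels
            return np.round(w / scale) * scale

        def loss_error_aware_quantize(w, loss_fn, grad_fn,
                                      frac=0.5, lr=0.1, steps=20, bits=2):
            w = w.copy()
            order = np.argsort(-np.abs(w))  # partition by magnitude (assumed)
            k = int(len(w) * frac)
            q_idx, r_idx = order[:k], order[k:]
            first_loss = loss_fn(w)               # loss before quantization
            w[q_idx] = quantize(w[q_idx], bits)   # low-bit second weights
            for _ in range(steps):
                diff = loss_fn(w) - first_loss    # quantization loss error
                if diff <= 0.0:
                    break
                w[r_idx] -= lr * grad_fn(w)[r_idx]  # retrain kept group only
            return w

        # Toy usage: the loss depends on the sum of the weights, so the
        # retained group can absorb the error the quantized group adds.
        target = np.array([0.9, -0.4, 0.2, -0.05, 0.6])
        s = target.sum()
        loss_fn = lambda w: 0.5 * (w.sum() - s) ** 2
        grad_fn = lambda w: (w.sum() - s) * np.ones_like(w)
        w_low_bit = loss_error_aware_quantize(target, loss_fn, grad_fn)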
