SYSTEM AND METHOD FOR A UNIFIED ARCHITECTURE MULTI-TASK DEEP LEARNING MACHINE FOR OBJECT RECOGNITION

    Publication No.: US20180307897A1

    Publication Date: 2018-10-25

    Application No.: US16024823

    Filing Date: 2018-06-30

Abstract: A system to recognize objects in an image includes an object detection network that outputs a first hierarchical-calculated feature for a detected object. A face alignment regression network determines a regression loss for alignment parameters based on the first hierarchical-calculated feature. A detection box regression network determines a regression loss for detected boxes based on the first hierarchical-calculated feature. The object detection network further includes a weighted loss generator to generate a weighted loss for the first hierarchical-calculated feature, the regression loss for the alignment parameters, and the regression loss for the detected boxes. A backpropagator backpropagates the generated weighted loss. A grouping network forms, based on the first hierarchical-calculated feature, the regression loss for the alignment parameters, and the bounding box regression loss, at least one of a box grouping, an alignment parameter grouping, and a non-maximum suppression of the alignment parameters and the detected boxes.
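
The shared-feature, weighted multi-task loss described above can be illustrated with a minimal PyTorch-style sketch. The backbone, head sizes, loss weights, and dummy targets below are illustrative assumptions rather than the patented architecture; box/alignment grouping and non-maximum suppression are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical shared backbone producing the "hierarchical-calculated" feature
# for each detected-object crop (architecture and sizes are assumptions).
class SharedBackbone(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

backbone = SharedBackbone()
align_head = nn.Linear(128, 10)   # face-alignment parameters (10 is an assumed count)
box_head = nn.Linear(128, 4)      # detection-box offsets

params = list(backbone.parameters()) + list(align_head.parameters()) + list(box_head.parameters())
opt = torch.optim.SGD(params, lr=1e-3)

crops = torch.randn(8, 3, 64, 64)            # dummy detected-object crops
align_target = torch.randn(8, 10)
box_target = torch.randn(8, 4)

feat = backbone(crops)                       # first hierarchical-calculated feature
align_loss = F.smooth_l1_loss(align_head(feat), align_target)   # alignment regression loss
box_loss = F.smooth_l1_loss(box_head(feat), box_target)         # detected-box regression loss

# Weighted-loss generator: combine the per-task losses (the 1.0/0.5 weights are assumptions).
weighted_loss = 1.0 * align_loss + 0.5 * box_loss

opt.zero_grad()
weighted_loss.backward()   # backpropagate the generated weighted loss through the shared feature
opt.step()
```

Because both regression heads read the same shared feature, the single backward pass on the weighted loss updates the backbone with gradients from both tasks at once.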

    METHOD AND APPARATUS FOR NEURAL NETWORK QUANTIZATION

    Publication No.: US20180107925A1

    Publication Date: 2018-04-19

    Application No.: US15433531

    Filing Date: 2017-02-15

    CPC classification number: G06N3/08 G06F17/16 G06N3/063 G06N3/082 G06N3/084

    Abstract: Apparatuses and methods of manufacturing same, systems, and methods for performing network parameter quantization in deep neural networks are described. In one aspect, diagonals of a second-order partial derivative matrix (a Hessian matrix) of a loss function of network parameters of a neural network are determined and then used to weight (Hessian-weighting) the network parameters as part of quantizing the network parameters. In another aspect, the neural network is trained using first and second moment estimates of gradients of the network parameters and then the second moment estimates are used to weight the network parameters as part of quantizing the network parameters. In yet another aspect, network parameter quantization is performed by using an entropy-constrained scalar quantization (ECSQ) iterative algorithm. In yet another aspect, network parameter quantization is performed by quantizing the network parameters of all layers of a deep neural network together at once.
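
The Hessian-weighting aspect can be sketched as a weighted k-means clustering of the parameters, where each weight's squared quantization error is scaled by its Hessian-diagonal entry (or, per the second aspect, by the second moment estimate of its gradient). The `hessian_weighted_kmeans` helper, the random data, and the cluster count are illustrative assumptions; the ECSQ iterative algorithm is not shown.

```python
import numpy as np

def hessian_weighted_kmeans(w, h_diag, k, iters=50, seed=0):
    """Quantize parameters w into k clusters, minimizing a distortion in which
    each weight's squared error is scaled by its Hessian diagonal entry h_diag."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(w, size=k, replace=False)
    for _ in range(iters):
        # Assign every parameter to its nearest cluster center.
        assign = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
        # Move each center to the Hessian-weighted mean of its members.
        for j in range(k):
            members = assign == j
            if members.any():
                centers[j] = np.sum(h_diag[members] * w[members]) / np.sum(h_diag[members])
    return centers, assign

# Dummy network parameters and (positive) Hessian-diagonal estimates; second moment
# estimates of the gradients could be substituted for h_diag, as in the second aspect.
w = np.random.randn(10_000)
h_diag = np.abs(np.random.randn(10_000)) + 1e-6

centers, assign = hessian_weighted_kmeans(w, h_diag, k=16)
w_quantized = centers[assign]   # every weight is replaced by its cluster center
print("Hessian-weighted distortion:", np.mean(h_diag * (w - w_quantized) ** 2))
```

Quantizing all layers' parameters together, as in the final aspect, would amount to concatenating the weights and curvature estimates of every layer into the single `w` and `h_diag` vectors before clustering.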

    MULTI-EXPERT ADVERSARIAL REGULARIZATION FOR ROBUST AND DATA-EFFICIENT DEEP SUPERVISED LEARNING

    Publication No.: US20220301296A1

    Publication Date: 2022-09-22

    Application No.: US17674832

    Filing Date: 2022-02-17

Abstract: A system and a method to train a neural network are disclosed. A first image is weakly and strongly augmented. The first image and the weakly and strongly augmented first images are input into a feature extractor to obtain augmented features. Each weakly augmented first image is input to a corresponding first expert head to determine a supervised loss for each weakly augmented first image. Each strongly augmented first image is input to a corresponding second expert head to determine a diversity loss for each strongly augmented first image. The feature extractor is trained to minimize the supervised loss on weakly augmented first images and to minimize a multi-expert consensus loss on strongly augmented first images. Each first expert head is trained to minimize the supervised loss for each weakly augmented first image, and each second expert head is trained to minimize the diversity loss for each strongly augmented first image.
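
One training step of the multi-expert scheme might look like the PyTorch-style sketch below, with a shared feature extractor and two expert heads; a single set of heads stands in for the paired first/second expert heads. The concrete forms chosen for the diversity loss (negated KL divergence from the expert consensus) and the consensus loss (NLL of the averaged expert predictions) are illustrative assumptions, not necessarily the losses defined in the application.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, num_experts, feat_dim = 10, 2, 64

extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
experts = nn.ModuleList([nn.Linear(feat_dim, num_classes) for _ in range(num_experts)])

opt_f = torch.optim.SGD(extractor.parameters(), lr=1e-3)
opt_e = torch.optim.SGD(experts.parameters(), lr=1e-3)

# Dummy weak/strong augmentations of the same first images (random stand-ins).
x_weak = torch.randn(16, 3, 32, 32)
x_strong = torch.randn(16, 3, 32, 32)
y = torch.randint(0, num_classes, (16,))

# --- Expert-head update: supervised loss on weak views, diversity loss on strong views.
f_weak, f_strong = extractor(x_weak).detach(), extractor(x_strong).detach()
sup_loss = sum(F.cross_entropy(h(f_weak), y) for h in experts) / num_experts
probs = torch.stack([F.softmax(h(f_strong), dim=1) for h in experts])
consensus = probs.mean(dim=0).detach()
# Assumed diversity term: push each expert's strong-view prediction away from the consensus.
div_loss = -sum(F.kl_div(p.log(), consensus, reduction="batchmean") for p in probs) / num_experts
opt_e.zero_grad()
(sup_loss + div_loss).backward()
opt_e.step()

# --- Feature-extractor update: supervised loss on weak views, consensus loss on strong views.
f_weak, f_strong = extractor(x_weak), extractor(x_strong)
sup_loss = sum(F.cross_entropy(h(f_weak), y) for h in experts) / num_experts
probs = torch.stack([F.softmax(h(f_strong), dim=1) for h in experts])
consensus_loss = F.nll_loss(probs.mean(dim=0).log(), y)   # assumed multi-expert consensus loss
opt_f.zero_grad()
(sup_loss + consensus_loss).backward()
opt_f.step()
```

The adversarial flavor comes from the opposing objectives: the heads are pushed apart on strongly augmented views, while the extractor is pushed to produce features on which the experts nonetheless agree.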

    SYSTEM AND METHOD FOR FEDERATED LEARNING USING WEIGHT ANONYMIZED FACTORIZATION

    Publication No.: US20210374608A1

    Publication Date: 2021-12-02

    Application No.: US17148557

    Filing Date: 2021-01-13

    Abstract: A federated machine-learning system includes a global server and client devices. The server receives updates of weight factor dictionaries and factor strengths vectors from the clients, and generates a globally updated weight factor dictionary and a globally updated factor strengths vector. A client device selects a group of parameters from a global group of parameters, and trains a model using a dataset of the client device and the group of selected parameters. The client device sends to the server a client-updated weight factor dictionary and a client-updated factor strengths vector. The client device receives the globally updated weight factor dictionary and the globally updated factor strengths vector, and retrains the model using the dataset of the client device, the group of parameters selected by the client device, and the globally updated weight factor dictionary and the globally updated factor strengths vector.
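
The communication pattern can be illustrated with a toy NumPy simulation in which each client's layer weights are reconstructed as a strengths-weighted sum of shared dictionary factors, clients refine both quantities locally, and the server averages the client updates. The factorization form, the least-squares local objective, the plain-averaging aggregation, and the omission of per-client parameter-group selection are all illustrative assumptions about the scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
num_clients, num_factors, out_dim, in_dim = 4, 8, 16, 32

# Global weight factor dictionary (a set of basis matrices) and factor strengths vector.
global_D = rng.normal(size=(num_factors, out_dim, in_dim))
global_s = np.ones(num_factors)

def reconstruct(D, s):
    """Assumed factorization: layer weights are a strengths-weighted sum of dictionary factors."""
    return np.tensordot(s, D, axes=1)                 # shape (out_dim, in_dim)

def client_update(D, s, X, Y, lr=1e-3, steps=50):
    """Stand-in for local training: refine the dictionary and strengths on the
    client's private (X, Y) data by gradient descent on a least-squares loss."""
    D, s = D.copy(), s.copy()
    for _ in range(steps):
        W = reconstruct(D, s)
        err = X @ W.T - Y                             # (n, out_dim) residuals
        grad_W = err.T @ X / len(X)                   # dLoss/dW
        grad_s = np.tensordot(D, grad_W, axes=([1, 2], [0, 1]))
        grad_D = s[:, None, None] * grad_W[None, :, :]
        s -= lr * grad_s
        D -= lr * grad_D
    return D, s

# Each client holds a private dataset (random stand-ins here).
clients = [(rng.normal(size=(64, in_dim)), rng.normal(size=(64, out_dim)))
           for _ in range(num_clients)]

for rnd in range(3):                                   # a few federated rounds
    updates = [client_update(global_D, global_s, X, Y) for X, Y in clients]
    # Server aggregation: average the client-updated dictionaries and strengths vectors.
    global_D = np.mean([D for D, _ in updates], axis=0)
    global_s = np.mean([s for _, s in updates], axis=0)
    print(f"round {rnd}: mean factor strength {global_s.mean():+.3f}")
```

Only the dictionary and strengths vector cross the network in each round; the clients' raw datasets never leave the devices, which is the point of exchanging factorized quantities rather than raw weights.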
