METHOD AND APPARATUS FOR DATA-FREE POST-TRAINING NETWORK QUANTIZATION AND GENERATING SYNTHETIC DATA BASED ON A PRE-TRAINED MACHINE LEARNING MODEL

    Publication Number: US20220083855A1

    Publication Date: 2022-03-17

    Application Number: US17096734

    Application Date: 2020-11-12

    Abstract: A method for training a generator, by a generator training system including a processor and memory, includes: extracting training statistical characteristics from a batch normalization layer of a pre-trained model, the training statistical characteristics including a training mean μ and a training variance σ²; initializing a generator configured with generator parameters; generating a batch of synthetic data using the generator; supplying the batch of synthetic data to the pre-trained model; measuring statistical characteristics of activations at the batch normalization layer and at the output of the pre-trained model in response to the batch of synthetic data, the statistical characteristics including a measured mean μ̂_ψ and a measured variance σ̂_ψ²; computing a training loss in accordance with a loss function L_ψ based on μ, σ², μ̂_ψ, and σ̂_ψ²; and iteratively updating the generator parameters in accordance with the training loss until a training completion condition is met, thereby obtaining the trained generator.
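
    A minimal PyTorch-style sketch of the batch-norm statistics-matching step this abstract describes. The squared-error form of the loss, the optimizer settings, and the fixed step count are illustrative assumptions (the patented loss L_ψ also involves statistics at the model output, omitted here for brevity), and the generator architecture is left to the caller.

    ```python
    # Sketch: train a generator so that synthetic data reproduces the batch-norm
    # statistics (mu, sigma^2) stored in a frozen pre-trained model.
    import torch
    import torch.nn as nn

    def bn_statistics_loss(model, synthetic_batch):
        """Run synthetic data through the frozen model and compare measured
        per-channel mean/variance at each BatchNorm layer with the stored
        running statistics."""
        captured = []

        def hook(module, inputs, output):
            x = inputs[0]                       # activations entering the BN layer
            mu_hat = x.mean(dim=(0, 2, 3))      # measured mean (mu_hat_psi)
            var_hat = x.var(dim=(0, 2, 3))      # measured variance (sigma_hat_psi^2)
            captured.append((module.running_mean, module.running_var, mu_hat, var_hat))

        handles = [m.register_forward_hook(hook)
                   for m in model.modules() if isinstance(m, nn.BatchNorm2d)]
        model(synthetic_batch)
        for h in handles:
            h.remove()

        # Assumed squared-error matching loss over all BN layers.
        return sum((mu_hat - mu).pow(2).sum() + (var_hat - var).pow(2).sum()
                   for mu, var, mu_hat, var_hat in captured)

    def train_generator(generator, pretrained_model, steps=1000, batch_size=64, z_dim=128):
        """Iteratively update the generator parameters; the training completion
        condition is simplified to a fixed number of steps."""
        pretrained_model.eval()
        for p in pretrained_model.parameters():
            p.requires_grad_(False)
        opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
        for _ in range(steps):
            z = torch.randn(batch_size, z_dim)
            synthetic = generator(z)            # batch of synthetic data
            loss = bn_statistics_loss(pretrained_model, synthetic)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return generator
    ```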

    METHOD AND APPARATUS FOR CONTINUAL FEW-SHOT LEARNING WITHOUT FORGETTING

    Publication Number: US20220067582A1

    Publication Date: 2022-03-03

    Application Number: US17156126

    Application Date: 2021-01-22

    Abstract: Methods and apparatuses are provided for continual few-shot learning. A model for a base task is generated with base classification weights for the base classes of the base task. A series of novel tasks is then received sequentially. Upon receiving each novel task in the series, the model is updated with novel classification weights for the novel classes of that task. The novel classification weights are generated by a weight generator based on one or more of the base classification weights and, when one or more other novel tasks in the series have previously been received, on the novel classification weights generated for the classes of those earlier tasks. Additionally, for each novel task, a first set of samples of the respective novel task is classified into the novel classes using the updated model.
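
    A minimal sketch of the weight-accumulation idea in this abstract: a weight generator produces classification weights for each new task from its support features plus all classification weights accumulated so far (base classes and earlier novel classes). The attention-based generator and cosine classifier below are illustrative assumptions, not the patented architecture.

    ```python
    # Sketch: continual few-shot learning by generating and accumulating
    # per-class classification weights as novel tasks arrive sequentially.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentionWeightGenerator(nn.Module):
        def __init__(self, feat_dim):
            super().__init__()
            self.query = nn.Linear(feat_dim, feat_dim)

        def forward(self, support_features, existing_weights):
            # support_features: (n_novel_classes, n_shots, feat_dim)
            proto = support_features.mean(dim=1)                  # class prototypes
            attn = F.softmax(self.query(proto) @ existing_weights.t(), dim=-1)
            # New weight = prototype refined by attending over existing class weights.
            return proto + attn @ existing_weights                # (n_novel_classes, feat_dim)

    def classify(features, all_weights):
        # Cosine-similarity classifier over base + accumulated novel weights.
        return F.normalize(features, dim=-1) @ F.normalize(all_weights, dim=-1).t()

    def continual_update(backbone, base_weights, novel_tasks, generator):
        """novel_tasks: iterable of support tensors shaped (classes, shots, C, H, W).
        Earlier weights are kept, so old classes are not forgotten."""
        all_weights = base_weights                                # (n_base, feat_dim)
        for support_images in novel_tasks:                        # one task at a time
            with torch.no_grad():
                feats = backbone(support_images.flatten(0, 1))
            feats = feats.view(*support_images.shape[:2], -1)
            novel_w = generator(feats, all_weights)
            all_weights = torch.cat([all_weights, novel_w], dim=0)
        return all_weights
    ```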

    METHOD AND APPARATUS FOR REDUCING COMPUTATIONAL COMPLEXITY OF CONVOLUTIONAL NEURAL NETWORKS

    Publication Number: US20210406647A1

    Publication Date: 2021-12-30

    Application Number: US17473813

    Application Date: 2021-09-13

    Abstract: A convolutional neural network (CNN) system for generating a classification for an input image is presented. The CNN system comprises circuitry that operates on clock cycles and is configured to compute a product of two received values, and at least one non-transitory computer-readable medium storing instructions for the circuitry to derive a feature map based on at least the input image and to puncture at least one selection among the feature map and a kernel by setting the value of an element at an index of the at least one selection to zero and cyclically shifting a puncture pattern, achieving a 1/d reduction in the number of clock cycles, where d is an integer puncture-interval value greater than 1. The feature map is convolved with the kernel to generate an output, and a classification of the input image is generated based on the output.
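
    A minimal sketch of the puncturing idea: zero out every d-th element of a feature map before convolution, with the puncture pattern cyclically shifted between rows. In hardware, skipping the zeroed multiply-accumulates is what would save roughly 1/d of the clock cycles; the mask below only emulates the effect numerically, and the exact puncture pattern is an assumption.

    ```python
    # Sketch: puncture a feature map with a cyclically shifted pattern, then convolve.
    import torch
    import torch.nn.functional as F

    def puncture(feature_map, d=2):
        """feature_map: (N, C, H, W). An element is zeroed when its column index
        falls on the punctured position, which shifts cyclically by one per row."""
        n, c, h, w = feature_map.shape
        cols = torch.arange(w)
        rows = torch.arange(h).unsqueeze(1)
        mask = ((cols - rows) % d != 0).to(feature_map.dtype)   # (H, W) cyclic pattern
        return feature_map * mask                                # zero punctured elements

    def punctured_conv(feature_map, kernel, d=2):
        return F.conv2d(puncture(feature_map, d), kernel, padding=kernel.shape[-1] // 2)
    ```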

    SYSTEM AND METHOD FOR DEEP MACHINE LEARNING FOR COMPUTER VISION APPLICATIONS

    Publication Number: US20210124985A1

    Publication Date: 2021-04-29

    Application Number: US16872199

    Application Date: 2020-05-11

    Abstract: A computer vision (CV) training system includes: a supervised learning system to estimate a supervision output from one or more input images according to a target CV application, and to determine a supervised loss according to the supervision output and a ground truth of the supervision output; an unsupervised learning system to determine an unsupervised loss according to the supervision output and the one or more input images; a weakly supervised learning system to determine a weakly supervised loss according to the supervision output and a weak label corresponding to the one or more input images; and a joint optimizer to concurrently optimize the supervised loss, the unsupervised loss, and the weakly supervised loss.
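
    A minimal sketch of jointly optimizing the three losses named in this abstract in a single training step. The concrete loss definitions (an L1 supervised term, a caller-supplied unsupervised term defined only on the input images, an image-level weak label for the weakly supervised term) and the weights lambda_u and lambda_w are illustrative assumptions; the patent does not fix them to these forms.

    ```python
    # Sketch: one training step that concurrently optimizes supervised,
    # unsupervised, and weakly supervised losses through a single backward pass.
    import torch
    import torch.nn.functional as F

    def joint_training_step(model, optimizer, images, ground_truth, weak_label,
                            unsup_loss_fn, lambda_u=0.5, lambda_w=0.1):
        output = model(images)                                    # supervision output
        supervised_loss = F.l1_loss(output, ground_truth)         # vs. ground truth
        unsupervised_loss = unsup_loss_fn(output, images)         # vs. input images only
        weakly_supervised_loss = F.binary_cross_entropy_with_logits(
            output.mean(dim=(-2, -1)), weak_label)                # vs. image-level weak label
        total = (supervised_loss
                 + lambda_u * unsupervised_loss
                 + lambda_w * weakly_supervised_loss)
        optimizer.zero_grad()
        total.backward()                                          # all three optimized together
        optimizer.step()
        return total
    ```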
