Abstract:
A method of online training of a classifier includes determining a distance from one or more feature vectors of an object to a first predetermined decision boundary established during off-line training for the classifier. The method also includes updating a decision rule as a function of the distance. The method further includes classifying a future example based on the updated decision rule.
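As a minimal sketch of such a scheme (illustrative only; the abstract does not fix a particular classifier), the following Python assumes a linear boundary trained off-line plus an on-line threshold that plays the role of the updatable decision rule. All names are hypothetical.

    import numpy as np

    class OnlineClassifier:
        """Sketch: a linear boundary w.x + b = 0 fixed off-line, plus an
        on-line threshold serving as the updatable decision rule."""

        def __init__(self, w, b, rate=0.1):
            self.w = np.asarray(w, dtype=float)  # off-line trained weights
            self.b = float(b)                    # off-line trained bias
            self.threshold = 0.0                 # on-line updated decision rule
            self.rate = rate

        def distance(self, x):
            # Signed distance from the feature vector to the boundary.
            return (self.w @ x + self.b) / np.linalg.norm(self.w)

        def update_rule(self, x, label):
            # On a misclassified example, move the threshold toward that
            # example's distance, i.e., update the rule as a function of it.
            d = self.distance(x)
            if np.sign(d - self.threshold) != label:
                self.threshold += self.rate * (d - self.threshold)

        def classify(self, x):
            # Classify a future example with the updated rule.
            return 1 if self.distance(x) > self.threshold else -1

    clf = OnlineClassifier(w=[1.0, -0.5], b=0.2)
    clf.update_rule(np.array([0.3, 0.1]), label=-1)
    print(clf.classify(np.array([0.3, 0.1])))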
Abstract:
A method of generating a classifier model includes distributing a common feature model to two or more users. The method also includes training multiple classifiers on top of the common feature model. The method further includes distributing a first classifier of the multiple classifiers to a first user and a second classifier of the multiple classifiers to a second user.
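One way to read this in code (a hedged Python sketch; the layer shapes, the softmax head, and the packaging step are assumptions, not the disclosed design): a shared feature extractor is frozen and distributed, and each user receives a classifier head trained on top of it.

    import numpy as np

    def common_features(X, W_shared):
        # Shared feature model distributed to every user (one linear layer + ReLU).
        return np.maximum(0.0, X @ W_shared.T)

    def train_head(F, labels, n_classes, lr=0.1, epochs=200):
        # Train a small softmax classifier on top of the frozen feature model.
        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.01, size=(n_classes, F.shape[1]))
        for _ in range(epochs):
            logits = F @ W.T
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            p[np.arange(len(labels)), labels] -= 1.0   # softmax cross-entropy grad
            W -= lr * p.T @ F / len(labels)
        return W

    rng = np.random.default_rng(1)
    W_shared = rng.normal(size=(8, 4))                         # common feature model
    X1, y1 = rng.normal(size=(32, 4)), rng.integers(0, 2, 32)  # user 1's data
    head_1 = train_head(common_features(X1, W_shared), y1, n_classes=2)
    # Distribution step: user 1 receives (W_shared, head_1); a second head
    # trained on another user's data would be sent to user 2.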
Abstract:
A method of learning a model includes receiving model updates from one or more users. The method also includes computing an updated model based on a previous model and the model updates. The method further includes transmitting data related to a subset of the updated model to one or more of the users based on the updated model.
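The abstract leaves the update function open; one plausible reading is a federated-averaging-style rule, sketched below in Python with illustrative names. The "subset" heuristic (send only the entries that changed most) is likewise an assumption.

    import numpy as np

    def aggregate(previous_model, user_updates, lr=1.0):
        # Updated model = previous model plus the mean of the user deltas
        # (a FedAvg-style choice; the exact function is not specified).
        return previous_model + lr * np.mean(user_updates, axis=0)

    def subset_for_user(updated_model, previous_model, k=5):
        # Pick one possible "subset of the updated model" to transmit:
        # the k parameters that changed the most.
        idx = np.argsort(np.abs(updated_model - previous_model))[-k:]
        return idx, updated_model[idx]

    prev = np.zeros(100)
    updates = [np.random.default_rng(i).normal(size=100) * 0.1 for i in range(3)]
    new = aggregate(prev, updates)
    idx, vals = subset_for_user(new, prev)   # data sent back to a user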
Abstract:
A method of distributed computation includes computing a first set of results in a first computational chain with a first population of processing nodes and passing the first set of results to a second population of processing nodes. The method also includes entering a first rest state with the first population of processing nodes after passing the first set of results, and computing a second set of results in the first computational chain with the second population of processing nodes based on the first set of results. The method further includes passing the second set of results to the first population of processing nodes, entering a second rest state with the second population of processing nodes after passing the second set of results, and orchestrating the first computational chain.
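Read as a scheduling pattern, the abstract describes two node populations that alternate between computing and resting while an orchestrator hands the chain back and forth. A toy Python sketch (the stage functions and round count are placeholders, not the disclosed implementation):

    def run_chain(x, stage_a, stage_b, rounds=2):
        # Orchestrate the first computational chain: the active population
        # computes its share, passes the results on, and enters a rest state.
        active, resting = ("A", stage_a), ("B", stage_b)
        for _ in range(rounds):
            name, stage = active
            x = stage(x)   # compute this population's set of results
            print(f"population {name}: results passed, entering rest state")
            active, resting = resting, active   # hand the chain to the other pool
        return x

    result = run_chain(1.0,
                       stage_a=lambda v: v * 2.0,   # population A's computation
                       stage_b=lambda v: v + 1.0)   # population B's computation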
Abstract:
Certain aspects of the present disclosure support efficient implementation of common neuron models. In an aspect, a first memory layout can be allocated for parameters and state variables of instances of a first neuron model, and a second memory layout different from the first memory layout can be allocated for parameters and state variables of instances of a second neuron model having a different complexity than the first neuron model.
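In Python terms this could look like per-model structured dtypes, one pool per neuron model, so a simple model's instances are never padded out to the stride of a complex one. A sketch (field names borrowed loosely from common neuron models, not from the disclosure):

    import numpy as np

    # First memory layout: a simple model with few parameters/state variables.
    simple_dtype = np.dtype([("v", np.float32), ("tau", np.float32)])

    # Second, wider layout for a more complex model (e.g., Izhikevich-like).
    complex_dtype = np.dtype([("v", np.float32), ("u", np.float32),
                              ("a", np.float32), ("b", np.float32),
                              ("c", np.float32), ("d", np.float32)])

    simple_pool = np.zeros(1024, dtype=simple_dtype)   # 8 bytes per instance
    complex_pool = np.zeros(256, dtype=complex_dtype)  # 24 bytes per instance

Keeping the pools separate means each model's instances stay contiguous at their natural stride, which is the efficiency the abstract is after.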
Abstract:
Methods and apparatus are provided for training a neural device having an artificial nervous system by modulating at least one training parameter during the training. One example method for training a neural device having an artificial nervous system generally includes observing the neural device in a training environment and modulating at least one training parameter based at least in part on the observing. For example, the training apparatus described herein may modify the neural device's internal learning mechanisms (e.g., spike rate, learning rate, neuromodulators, sensor sensitivity, etc.) and/or the training environment's stimuli (e.g., move a flame closer to the device, make the scene darker, etc.). In this manner, the speed with which the neural device is trained (i.e., the training rate) may be significantly increased compared to conventional neural device training systems.
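As a toy illustration of the modulation loop (the policy, field names, and thresholds here are invented for the example):

    def modulate(device, observation):
        # If the observed training progress is slow, strengthen both the
        # internal learning mechanism and the environmental stimulus.
        if observation["progress"] < 0.5:
            device["learning_rate"] *= 1.5           # internal mechanism
            observation["stimulus_distance"] *= 0.8  # e.g., move the flame closer
        else:
            device["learning_rate"] *= 0.9           # back off once learning is fast
        return device, observation

    device = {"learning_rate": 0.01}
    obs = {"progress": 0.3, "stimulus_distance": 1.0}
    device, obs = modulate(device, obs)   # one step of the training loop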
Abstract:
A method for dynamically setting a neuron value includes processing a data structure containing a set of parameters for a neuron model and determining a number of segments defined in the set of parameters. The method also includes determining a number of neuron types defined in the set of parameters and determining at least one boundary for a first segment.
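A hedged sketch of that parameter walk in Python (the field names "segments", "neuron_types", and "boundary" are illustrative; the disclosure does not name them):

    def parse_neuron_params(params):
        # Count the segments and neuron types defined in the parameter set,
        # then read the boundary of the first segment.
        n_segments = len(params["segments"])
        n_types = len(params["neuron_types"])
        first_boundary = params["segments"][0]["boundary"]
        return n_segments, n_types, first_boundary

    params = {"segments": [{"boundary": -65.0}, {"boundary": -50.0}],
              "neuron_types": ["LIF", "Izhikevich"]}
    print(parse_neuron_params(params))   # (2, 2, -65.0)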