Abstract:
An apparatus for operating a computational network is configured to determine a low-rank approximation for one or more layers of the computational network based at least in part on a set of residual targets. A set of candidate rank vectors corresponding to the set of residual targets may be determined. Each of the candidate rank vectors may be evaluated using an objective function. A candidate rank vector may be selected and used to determine the low-rank approximation. The computational network may be compressed based on the low-rank approximation. In turn, the computational network may be operated using the one or more compressed layers.
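The abstract does not fix a particular factorization or objective; a minimal sketch of the idea, assuming a truncated SVD as the low-rank approximation, relative Frobenius reconstruction error as the residual, and a simple size-plus-error objective (all names and the objective function are illustrative, not the claimed method), could look like this:

```python
import numpy as np

def candidate_rank(W, residual_target):
    """Smallest rank whose truncated-SVD reconstruction keeps the relative
    residual (Frobenius norm) at or below residual_target."""
    s = np.linalg.svd(W, compute_uv=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    residual = np.sqrt(np.clip(1.0 - energy, 0.0, None))
    hits = np.nonzero(residual <= residual_target)[0]
    return int(hits[0]) + 1 if hits.size else len(s)

def compress_layer(W, rank):
    """Factor W (m x n) into A (m x rank) @ B (rank x n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]

W = np.random.randn(256, 128)                 # stand-in layer weights
targets = [0.3, 0.2, 0.1]                     # the set of residual targets
candidates = [candidate_rank(W, t) for t in targets]

def objective(rank):
    """Toy objective: compressed parameter count plus reconstruction error."""
    A, B = compress_layer(W, rank)
    err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    return (A.size + B.size) / W.size + err

best_rank = min(candidates, key=objective)    # select a candidate rank
A, B = compress_layer(W, best_rank)           # factors of the compressed layer
```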
Abstract:
A method for improving neural dynamics includes obtaining prototypical neuron dynamics. The method also includes modifying parameters of a neuron model so that the neuron model matches the prototypical neuron dynamics. The neuron dynamics comprise membrane voltages and/or spike timing.
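As a hedged illustration only: assuming the neuron model is a leaky integrate-and-fire with parameters (tau, R) and the prototypical dynamics are a target membrane-voltage trace, the parameter matching could be posed as a least-squares fit. The model choice and every name here are assumptions, not the patent's method:

```python
import numpy as np
from scipy.optimize import minimize

def simulate_lif(params, I, dt=1e-3):
    """Membrane voltage of a leaky integrate-and-fire neuron driven by I."""
    tau, R = params
    v = np.zeros(len(I))
    for t in range(1, len(I)):
        v[t] = v[t - 1] + dt / tau * (-v[t - 1] + R * I[t - 1])
    return v

def fit_neuron(v_proto, I):
    """Modify (tau, R) so the model's voltage trace matches the prototype."""
    loss = lambda p: np.mean((simulate_lif(p, I) - v_proto) ** 2)
    return minimize(loss, x0=[0.02, 1.0], method="Nelder-Mead").x

I = 0.5 * np.ones(500)                    # constant input current
v_proto = simulate_lif([0.01, 2.0], I)    # "prototypical" dynamics
tau_fit, R_fit = fit_neuron(v_proto, I)   # should land near (0.01, 2.0)
```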
Abstract:
A method of generating executable code for a target platform in a neural network includes receiving a spiking neural network description. The method also includes receiving platform-specific instructions for one or more target platforms. Further, the method includes generating executable code for the target platform(s) based on the platform-specific instructions and the network description.
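A toy sketch of such a generator, with an invented network-description dictionary and invented per-platform instruction templates (nothing here reflects the actual claimed formats):

```python
# Hypothetical spiking-network description and per-platform templates.
NETWORK = {
    "neurons": [{"id": 0, "type": "LIF"}, {"id": 1, "type": "LIF"}],
    "synapses": [{"pre": 0, "post": 1, "weight": 0.5, "delay": 2}],
}

PLATFORM_TEMPLATES = {
    "cpu": {
        "neuron": "neuron_{id} = make_{type_lower}();",
        "synapse": "connect(neuron_{pre}, neuron_{post}, {weight}, {delay});",
    },
}

def generate(network, platform):
    """Emit source text by filling the platform's instruction templates
    with entries from the network description."""
    tpl = PLATFORM_TEMPLATES[platform]
    lines = [tpl["neuron"].format(type_lower=n["type"].lower(), **n)
             for n in network["neurons"]]
    lines += [tpl["synapse"].format(**s) for s in network["synapses"]]
    return "\n".join(lines)

print(generate(NETWORK, "cpu"))
```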
Abstract:
Methods and apparatus are provided for training a neural device having an artificial nervous system by modulating at least one training parameter during the training. One example method for training a neural device having an artificial nervous system generally includes observing the neural device in a training environment and modulating at least one training parameter based at least in part on the observing. For example, the training apparatus described herein may modify the neural device's internal learning mechanisms (e.g., spike rate, learning rate, neuromodulators, sensor sensitivity, etc.) and/or the training environment's stimuli (e.g., move a flame closer to the device, make the scene darker, etc.). In this manner, the speed with which the neural device is trained (i.e., the training rate) may be significantly increased compared to conventional neural device training systems.
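One way to picture this, purely as an assumption-laden sketch: a training loop that observes the device in its environment, modulates a training parameter (here a learning rate) based on the observation, and also modulates an environmental stimulus. The Device and Environment interfaces below are invented stubs:

```python
class Device:
    def learn(self, observation, learning_rate):
        pass  # placeholder for the device's internal learning mechanism

class Environment:
    def __init__(self):
        self.error = 1.0
    def observe(self, device):
        self.error *= 0.995          # pretend training reduces error
        return {"error": self.error}
    def set_stimulus_intensity(self, level):
        self.intensity = level       # e.g., scene brightness

def train(device, env, steps=1000):
    for step in range(steps):
        obs = env.observe(device)
        # Modulate a training parameter from the observation: keep the
        # learning rate high while error is large, anneal it as error falls.
        lr = 0.1 if obs["error"] > 0.5 else 0.001
        device.learn(obs, learning_rate=lr)
        # The environment's stimuli may be modulated as well.
        env.set_stimulus_intensity(min(1.0, step / steps))

train(Device(), Environment())
```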
Abstract:
Certain aspects of the present disclosure support efficient implementation of common neuron models. In an aspect, a first memory layout can be allocated for parameters and state variables of instances of a first neuron model, and a second memory layout different from the first memory layout can be allocated for parameters and state variables of instances of a second neuron model having a different complexity than the first neuron model.
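As an illustration of the distinction (not the claimed implementation), a simple neuron model might use an interleaved array-of-structures layout while a more complex model uses a planar structure-of-arrays layout; the field names below are placeholders:

```python
import numpy as np

N = 1024  # instances per neuron model

# Layout 1: interleaved (array-of-structures) for a simple model with few
# fields; one instance's parameters and state sit contiguously in memory.
simple = np.zeros(N, dtype=[("v", "f4"), ("threshold", "f4")])

# Layout 2: planar (structure-of-arrays) for a more complex model; each
# parameter/state variable gets its own contiguous array, which suits
# vectorized updates across many instances.
complex_model = {
    name: np.zeros(N, dtype="f4")
    for name in ("v", "u", "a", "b", "c", "d")  # e.g., Izhikevich-style state
}

# Updates touch memory very differently under the two layouts:
simple["v"] += 0.1 * (simple["threshold"] - simple["v"])
complex_model["v"] += 0.04 * complex_model["v"] ** 2  # one plane at a time
```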
Abstract:
A method for dynamically modifying synaptic delays in a neural network includes initializing a delay parameter and operating the neural network. The method further includes dynamically updating the delay parameter based on a program derived from a statement that includes the delay parameter.
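A rough sketch under invented names: a delay line whose length is initialized up front and then resized at run time by an update rule written in terms of the delay parameter itself:

```python
import collections

delay = 3                                       # initial delay (time steps)
line = collections.deque([0.0] * delay, maxlen=delay)

def delay_statement(delay, t):
    """The 'statement including the delay parameter': the new delay is a
    function of the current delay (here it grows every 100 steps)."""
    return delay + 1 if t % 100 == 99 else delay

for t in range(500):
    line.append(1.0 if t % 7 == 0 else 0.0)     # presynaptic spike train
    delivered = line[0]                         # emerges after `delay` steps;
                                                # would drive the postsynapse
    new_delay = delay_statement(delay, t)
    if new_delay != delay:                      # dynamic update while running
        delay = new_delay
        line = collections.deque(list(line)[-delay:], maxlen=delay)
```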
Abstract:
A method of online training of a classifier includes determining a distance from one or more feature vectors of an object to a first predetermined decision boundary established during offline training for the classifier. The method also includes updating a decision rule as a function of the distance. The method further includes classifying a future example based on the updated decision rule.
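Assuming, for illustration only, that the offline-trained boundary is a linear hyperplane w·x + b = 0 and that the decision rule is updated by shifting the bias as a function of the signed distance, an online update might look like this (all values are placeholders):

```python
import numpy as np

w = np.array([1.0, -0.5])   # boundary from offline training (placeholder)
b = 0.2

def signed_distance(x):
    """Signed distance from feature vector x to the hyperplane w @ x + b = 0."""
    return (w @ x + b) / np.linalg.norm(w)

def update_decision_rule(x, label, lr=0.1):
    """Shift the bias as a function of the distance when x falls on the
    wrong side of the boundary (label is +1 or -1)."""
    global b
    d = signed_distance(x)
    if np.sign(d) != label:
        b += lr * (label - d)

def classify(x):
    """Classify a future example with the updated rule."""
    return 1 if signed_distance(x) >= 0 else -1

update_decision_rule(np.array([0.0, 0.5]), label=1)   # online update
print(classify(np.array([0.0, 0.5])))                 # -> 1
```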
Abstract:
A method for dynamically setting a neuron value includes processing a data structure containing a set of parameters for a neuron model and determining a number of segments defined in the set of parameters. The method also includes determining a number of neuron types defined in the set of parameters and determining at least one boundary for a first segment.
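A hypothetical reading of the data structure, with invented field names: a table that lists neuron types and piecewise segments, from which the segment count, type count, and the first segment's boundary can be read off:

```python
# Hypothetical parameter structure for a segmented (piecewise) neuron model.
# Field names and layout are invented for illustration.
params = {
    "neuron_types": ["regular_spiking", "fast_spiking"],
    "segments": [
        {"v_min": -80.0, "v_max": -50.0, "slope": 0.04},  # subthreshold
        {"v_min": -50.0, "v_max": 30.0,  "slope": 0.10},  # spiking
    ],
}

def parse(params):
    num_segments = len(params["segments"])           # segments defined
    num_types = len(params["neuron_types"])          # neuron types defined
    first_boundary = params["segments"][0]["v_max"]  # boundary of segment 0
    return num_segments, num_types, first_boundary

print(parse(params))  # -> (2, 2, -50.0)
```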