Abstract:
Methods and apparatus are provided for training a neural device having an artificial nervous system by modulating at least one training parameter during the training. One example method for training a neural device having an artificial nervous system generally includes observing the neural device in a training environment and modulating at least one training parameter based at least in part on the observing. For example, the training apparatus described herein may modify the neural device's internal learning mechanisms (e.g., spike rate, learning rate, neuromodulators, sensor sensitivity, etc.) and/or the training environment's stimuli (e.g., move a flame closer to the device, make the scene darker, etc.). In this manner, the speed with which the neural device is trained (i.e., the training rate) may be significantly increased compared to conventional neural device training systems.
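As a rough illustration of observation-driven modulation, the sketch below adjusts one internal learning mechanism (the learning rate) based on observed performance in the training environment. The class name, update rule, and thresholds are assumptions for illustration, not the disclosed implementation.

```python
# Hypothetical sketch: modulate a training parameter (learning rate)
# based on observing the device's error in its training environment.
class TrainingModulator:
    def __init__(self, learning_rate=0.01, min_lr=1e-4, max_lr=0.1):
        self.learning_rate = learning_rate
        self.min_lr = min_lr
        self.max_lr = max_lr

    def modulate(self, observed_error, previous_error):
        # If the device is improving, raise the learning rate to speed
        # up training; if performance regresses, back off for stability.
        if observed_error < previous_error:
            self.learning_rate = min(self.learning_rate * 1.1, self.max_lr)
        else:
            self.learning_rate = max(self.learning_rate * 0.5, self.min_lr)
        return self.learning_rate
```

The same pattern could modulate other parameters named in the abstract (spike rate, sensor sensitivity) or environmental stimuli, with the observation feeding back into the modulation step each training iteration.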
Abstract:
Certain aspects of the present disclosure relate to methods and apparatus for tracking objects via neuro-simulation with a single two-dimensional imaging device. The neuro-simulation may report a point of interest in an image provided by the device. The device may center on the point of interest using one or more actuators. The simulation mechanism may take pixels as input and output a plurality of angles to the actuators to adjust the device's orientation.
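A minimal sketch of the pixels-in, angles-out mapping follows. It uses the brightest pixel as a stand-in point of interest and converts its offset from the image center into pan/tilt corrections; the saliency measure, field-of-view values, and function names are illustrative assumptions, not the disclosed mechanism.

```python
import numpy as np

def point_of_interest(image):
    # Toy saliency: the brightest pixel stands in for whatever
    # point of interest the neuro-simulation would report.
    return np.unravel_index(np.argmax(image), image.shape)

def centering_angles(image, fov_deg=(60.0, 45.0)):
    # Map the point-of-interest offset from the image center to
    # pan/tilt angle corrections (in degrees) for the actuators.
    h, w = image.shape
    row, col = point_of_interest(image)
    pan = (col - w / 2) / w * fov_deg[0]
    tilt = (row - h / 2) / h * fov_deg[1]
    return pan, tilt
```

Driving the actuators by these angles each frame moves the point of interest toward the image center, closing the tracking loop.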
Abstract:
Data compression systems, methods, and computer program products are disclosed. For each successive input word of an input stream, it is determined whether the input word matches an entry in a lookback table. The lookback table is updated in response to the input word. Input words may be of a number of data types, including zero runs and full or partial matches with an entry in the lookback table. A codeword is generated by entropy encoding a data type corresponding to the input word. The lookback table may be indexed by the position of the input word in the input stream.
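The sketch below illustrates the word-classification step under stated assumptions: a fixed-size table indexed by stream position, 32-bit words, and a partial match defined as equal upper 16 bits. The resulting stream of data-type symbols would then be entropy encoded, with frequent types receiving short codewords.

```python
ZERO_RUN, FULL_MATCH, PARTIAL_MATCH, NO_MATCH = range(4)

def classify_words(words, table_size=16):
    # Classify each 32-bit input word against a lookback table indexed
    # by the word's position in the input stream. Table size and the
    # partial-match criterion are illustrative assumptions.
    table = [0] * table_size
    types = []
    for i, word in enumerate(words):
        slot = i % table_size              # index by stream position
        if word == 0:
            types.append(ZERO_RUN)
        elif word == table[slot]:
            types.append(FULL_MATCH)
        elif (word >> 16) == (table[slot] >> 16):
            types.append(PARTIAL_MATCH)    # upper bytes match
        else:
            types.append(NO_MATCH)
        table[slot] = word                 # update table with input word
    return types
```

In a full codec, consecutive ZERO_RUN symbols would collapse into run lengths, and partial matches would additionally emit the non-matching bytes alongside the entropy-coded type.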
Abstract:
In one embodiment, a depth-first deep convolutional network (DCN) includes a first convolutional layer having a first first-layer kernel and adapted to convolve a first input tensor, and a second convolutional layer having a first second-layer kernel and adapted to convolve a second input tensor. A method for the DCN includes initiating convolution in the first convolutional layer of the first input tensor with the first first-layer kernel to generate a value strip for the second input tensor and, prior to completion of the convolution in the first convolutional layer, initiating convolution in the second convolutional layer of the second input tensor with the first second-layer kernel to generate a value strip for a third-layer input tensor.
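A minimal single-channel sketch of this depth-first scheduling follows: layer 1 emits one output row at a time (a value strip), and layer 2 begins convolving as soon as enough rows are available, before layer 1 finishes the whole input. Real DCNs operate on multi-channel tensors with wider strips; the helper names and valid-convolution choice here are assumptions.

```python
import numpy as np

def conv_rows(inp, kernel, rows):
    # Valid 2D convolution restricted to the requested output rows.
    kh, kw = kernel.shape
    out_w = inp.shape[1] - kw + 1
    out = np.empty((len(rows), out_w))
    for i, r in enumerate(rows):
        patch = inp[r:r + kh, :]
        for c in range(out_w):
            out[i, c] = np.sum(patch[:, c:c + kw] * kernel)
    return out

def depth_first_two_layers(x, k1, k2):
    h1 = x.shape[0] - k1.shape[0] + 1      # layer-1 output height
    w1 = x.shape[1] - k1.shape[1] + 1
    h2 = h1 - k2.shape[0] + 1              # layer-2 output height
    y1 = np.empty((h1, w1))
    y2 = np.empty((h2, w1 - k2.shape[1] + 1))
    for r in range(h1):
        # Layer 1 emits one value strip of the second input tensor...
        y1[r] = conv_rows(x, k1, [r])[0]
        # ...and layer 2 consumes it as soon as its kernel fits,
        # before layer 1 has finished the whole input.
        s = r - k2.shape[0] + 1
        if s >= 0:
            y2[s] = conv_rows(y1, k2, [s])[0]
    return y2

x = np.arange(36.0).reshape(6, 6)
print(depth_first_two_layers(x, np.ones((3, 3)), np.ones((2, 2))).shape)  # (3, 3)
```

The benefit of this interleaving is locality: each value strip is consumed while still hot, so intermediate tensors never need to be materialized in full.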