Methods and apparatus for spiking neural computation

    Publication No.: US09367797B2

    Publication Date: 2016-06-14

    Application No.: US13369080

    Filing Date: 2012-02-08

    IPC Classes: G06N3/04, G06N3/08

    CPC Classes: G06N3/049, G06N3/08

    Abstract: Certain aspects of the present disclosure provide methods and apparatus for spiking neural computation of general linear systems. One example aspect is a neuron model that codes information in the relative timing between spikes; synaptic weights are unnecessary in this model. In other words, a connection may either exist (a significant synapse) or not (an insignificant or non-existent synapse). Certain aspects of the present disclosure use binary-valued inputs and outputs and do not require post-synaptic filtering, though they may involve modeling of connection delays (e.g., dendritic delays). A single neuron model may be used to compute any general linear transformation x=Ax+Bu to arbitrary precision. This neuron model may also be capable of learning, such as learning input delays (e.g., corresponding to scaling values) to achieve a target output delay (or output value). Learning may also be used to determine a logical relation of causal inputs.
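    The idea that connection delays, rather than weights, can scale values admits a compact illustration. The sketch below is an assumption-laden toy, not the patented neuron model: it encodes a positive scalar as a spike latency on a logarithmic scale, so that adding a fixed connection delay multiplies the decoded value by a coefficient. The names `value_to_delay`/`delay_to_value` and the constant `GAMMA` are hypothetical.

```python
import math

GAMMA = 1.0  # latency scale constant (assumed)

def value_to_delay(x: float) -> float:
    """Encode a positive scalar as spike latency: larger values spike sooner."""
    return -math.log(x) / GAMMA

def delay_to_value(t: float) -> float:
    """Decode a spike latency back to a scalar value."""
    return math.exp(-GAMMA * t)

# A fixed connection delay of value_to_delay(a) scales the decoded value by a,
# so multiplication by a coefficient needs no synaptic weight at all.
x, a = 0.5, 0.25
t_out = value_to_delay(x) + value_to_delay(a)   # delays add along the path
assert abs(delay_to_value(t_out) - a * x) < 1e-9
```

    In this log-time coding, a chain of delays implements a product of coefficients, which is one way a weightless, delay-based model could realize the scaling terms of a linear transformation.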

    42. Methods and apparatus for spiking neural computation
    Granted patent (in force)

    Publication No.: US09111225B2

    Publication Date: 2015-08-18

    Application No.: US13368994

    Filing Date: 2012-02-08

    IPC Classes: G06N3/08, G06N3/04

    CPC Classes: G06N3/049, G06N3/08

    Abstract: Certain aspects of the present disclosure provide methods and apparatus for spiking neural computation of general linear systems. One example aspect is a neuron model that codes information in the relative timing between spikes; synaptic weights are unnecessary in this model. In other words, a connection may either exist (a significant synapse) or not (an insignificant or non-existent synapse). Certain aspects of the present disclosure use binary-valued inputs and outputs and do not require post-synaptic filtering, though they may involve modeling of connection delays (e.g., dendritic delays). A single neuron model may be used to compute any general linear transformation x=Ax+Bu to arbitrary precision. This neuron model may also be capable of learning, such as learning input delays (e.g., corresponding to scaling values) to achieve a target output delay (or output value). Learning may also be used to determine a logical relation of causal inputs.

    44. Method and apparatus of primary visual cortex simple cell training and operation
    Granted patent (in force)

    Publication No.: US08843426B2

    Publication Date: 2014-09-23

    Application No.: US13115158

    Filing Date: 2011-05-25

    Applicant: Vladimir Aparin

    Inventor: Vladimir Aparin

    IPC Classes: G06N3/063, G06N3/04

    CPC Classes: G06N3/063, G06N3/0445

    Abstract: Certain aspects of the present disclosure present a technique for primary visual cortex (V1) cell training and operation. The present disclosure proposes a model structure of V1 cells and retinal ganglion cells (RGCs), along with an efficient method of training the connectivity between these two layers of cells, such that the proposed method leads to the autonomous formation of feature detectors within the V1 layer. The proposed approach enables a hardware-efficient and biologically plausible implementation of image recognition and motion detection systems.

    46. Methods and systems for replaceable synaptic weight storage in neuro-processors
    Granted patent (in force)

    Publication No.: US08676734B2

    Publication Date: 2014-03-18

    Application No.: US12831484

    Filing Date: 2010-07-07

    Applicant: Vladimir Aparin

    Inventor: Vladimir Aparin

    IPC Classes: G06F15/18, G06N3/08, G06N3/00

    CPC Classes: G06N3/063

    Abstract: Certain embodiments of the present disclosure support techniques for storing synaptic weights in a replaceable storage separate from the neuro-processor chip. The replaceable synaptic memory gives a unique functionality to the neuro-processor and improves its flexibility for supporting a large variety of applications. In addition, the replaceable synaptic storage can provide more choices for the type of memory used and may decrease the area and implementation cost of the overall neuro-processor chip.

    47. Method and apparatus for unsupervised training of input synapses of primary visual cortex simple cells and other neural circuits
    Granted patent (in force)

    Publication No.: US08583577B2

    Publication Date: 2013-11-12

    Application No.: US13115154

    Filing Date: 2011-05-25

    Applicant: Vladimir Aparin

    Inventor: Vladimir Aparin

    CPC Classes: G06N3/063

    Abstract: Certain aspects of the present disclosure present a technique for unsupervised training of input synapses of primary visual cortex (V1) simple cells and other neural circuits. The proposed unsupervised training method utilizes simple neuron models for both the Retinal Ganglion Cell (RGC) and V1 layers. The model simply sums the weighted inputs of each cell, where the inputs can take positive or negative values. The resulting weighted sums represent activations that can likewise be positive or negative. In one aspect of the present disclosure, the weights of each V1 cell can be adjusted, depending on the sign of the corresponding RGC output and the sign of that V1 cell's activation, in the direction of increasing the absolute value of the activation. The RGC-to-V1 weights can be positive or negative, modeling ON and OFF RGCs, respectively.
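    The sign-based update described in this abstract can be sketched in a few lines. The code below is one illustrative reading, not the patented method: each weight moves by the product of the sign of its RGC input and the sign of the V1 activation, which is exactly the direction that increases the activation's absolute value. The function name and the learning rate `ETA` are assumptions.

```python
import numpy as np

ETA = 0.01  # learning rate (assumed)

def v1_update(w: np.ndarray, x: np.ndarray) -> np.ndarray:
    """One unsupervised update for a single V1 cell.

    x holds signed RGC outputs (positive for ON cells, negative for OFF);
    w holds the RGC-to-V1 weights. Each weight moves in the direction that
    grows |activation|, using only the two signs.
    """
    y = float(w @ x)                       # V1 activation, can be +/-
    return w + ETA * np.sign(y) * np.sign(x)

rng = np.random.default_rng(0)
w = rng.normal(size=16)
x = rng.normal(size=16)
y0 = abs(w @ x)
for _ in range(200):
    w = v1_update(w, x)
assert abs(w @ x) > y0   # repeated updates increase |activation|
```

    Because the change in activation per step is `ETA * sign(y) * sum(|x_i|)`, the activation's magnitude grows monotonically for a fixed input pattern, which is the stated training objective.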

    48. METHOD AND APPARATUS FOR A LOCAL COMPETITIVE LEARNING RULE THAT LEADS TO SPARSE CONNECTIVITY
    Patent application (in force)

    Publication No.: US20120330870A1

    Publication Date: 2012-12-27

    Application No.: US13166269

    Filing Date: 2011-06-22

    Applicant: Vladimir Aparin

    Inventor: Vladimir Aparin

    IPC Classes: G06N3/08

    Abstract: Certain aspects of the present disclosure support a local competitive learning rule, applied in a computational network, that leads to sparse connectivity among the processing units of the network. The present disclosure provides a modification to the Oja learning rule, altering the constraint on the sum of squared weights in the Oja rule. This constraint can be intrinsic and local, as opposed to the commonly used multiplicative and subtractive normalizations, which are explicit and require knowledge of all input weights of a processing unit in order to update each one individually. The presented rule converges to a weight vector that is sparser (i.e., has more zero elements) than the weight vector learned by the original Oja rule. Such sparse connectivity can lead to higher selectivity of processing units to specific features, and it may require less memory to store the network configuration and less energy to operate it.
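    For reference, the original Oja rule that this disclosure modifies combines a Hebbian term with a decay term that implicitly drives the sum of squared weights toward one. The sketch below implements only the classic rule on toy data; the patent's modified, sparsity-inducing constraint is not reproduced here, and all parameters are illustrative.

```python
import numpy as np

ETA = 0.01  # learning rate (illustrative)

def oja_step(w: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Classic Oja rule: Hebbian growth plus a decay term that
    implicitly normalizes sum(w**2) toward 1."""
    y = float(w @ x)
    return w + ETA * y * (x - y * w)

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=3)
for _ in range(5000):
    s = rng.normal(scale=2.0)                       # strong component on axis 0
    x = np.array([s, 0.0, 0.0]) + rng.normal(scale=0.1, size=3)
    w = oja_step(w, x)
# w converges to the leading principal direction with ||w|| near 1
assert abs(np.linalg.norm(w) - 1.0) < 0.2 and abs(w[0]) > 0.9
```

    Note that the converged vector is unit-norm but generally dense; the disclosure's point is that replacing the sum-of-squares constraint with a local, intrinsic one can drive many of these weights exactly to zero.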

    50. METHODS AND SYSTEMS FOR NEURAL PROCESSOR TRAINING BY ENCOURAGEMENT OF CORRECT OUTPUT
    Patent application (in force)

    Publication No.: US20120011089A1

    Publication Date: 2012-01-12

    Application No.: US12832399

    Filing Date: 2010-07-08

    IPC Classes: G06N3/08, G06N3/063

    CPC Classes: G06N3/063, G06N3/049

    Abstract: Certain embodiments of the present disclosure support implementation of a neural processor with synaptic weights, wherein training of the synaptic weights is based on encouraging a specific output neuron to generate a spike. The implemented neural processor can be applied to the classification of images and other patterns.
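    A minimal reading of "encouraging a specific output neuron to spike" is a perceptron-style update: when the desired output neuron fails to reach threshold for a training input, its synapses from active inputs are strengthened. This sketch is an assumption, not the patented training procedure; the function name, `THRESHOLD`, and `ETA` are all hypothetical.

```python
import numpy as np

THRESHOLD = 1.0  # spiking threshold (assumed)
ETA = 0.1        # learning rate (assumed)

def encourage(w: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Strengthen the target neuron's synapses until it spikes for x."""
    if w @ x < THRESHOLD:          # desired neuron failed to spike
        w = w + ETA * x            # reinforce its active inputs
    return w

w = np.zeros(8)
x = np.abs(np.random.default_rng(2).normal(size=8))  # non-negative input pattern
for _ in range(100):
    w = encourage(w, x)
assert w @ x >= THRESHOLD   # the target neuron now spikes for this pattern
```

    Each failed presentation raises the target neuron's drive by `ETA * ||x||**2`, so for a fixed non-negative pattern the neuron is guaranteed to eventually spike, after which the weights stop changing.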
